Search Results

Search found 22040 results on 882 pages for 'process improvement'.


  • Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope by Mark Foster

    - by JuergenKress
    In the previous blog entry we looked at stakeholders and permissions, i.e. how we control interaction with the case and its artefacts. In this entry we'll look at case activities, specifically how we decide their scope. In the next part we'll look at how these activities relate to the over-arching case and how we can effectively visualize the relationship between the case and its activities. As mentioned in an earlier blog entry, case activities can be created from BPM processes, Human Tasks, or custom Java code. It is pretty obvious that we would use custom case activities when either we already have existing code that we would like to form part of a case, or we cannot provide the necessary functionality with a BPM process or simple Human Task. However, how do we determine what our BPM process as a case activity contains, and at what level of granularity? Take the following simple BPM process as an example, and read the full article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Configuration Manager Setting Causing Error PRJ0019

    - by Jeff Paterno
    Recently I ran into an issue with a project failing to build on an automated build server using CruiseControl. When I looked into the build log I saw that the Post-Build project was failing with the error message: "error PRJ0019: A tool returned an error code from "Performing Post-Build Event..."" This was most frustrating, especially since the solution was building without issue on my local development environment. The Post-Build project was a C++ project that basically called several batch files to unregister/register assemblies, copy resources and supporting files, and place other dependencies in the GAC. I decided to run each of the batch files manually to see if that would provide more information as to why this project was failing. This led me to determine that the batch file that was placing assemblies in the GAC was the culprit, and that it was failing to find a particular assembly. The missing assembly was the output of another project. The project that was not producing the expected output was another C++ project that called a batch file. This batch process was actually embedding resource files into an assembly and then copying the assembly to the expected location. The real confusion started when I looked back into my Subversion log and noted that nothing had changed in this project in more than 2 months! It was almost as if the project had stopped building altogether. But what would cause that?! The Configuration Manager, obviously! Checking the solution's Configuration Manager settings, I found that the project that was not producing any output was in fact not selected to be part of the build process when the "Any CPU" platform was selected. This was the problem! I had recently updated the CruiseControl configurations to force the solution to be built targeting the platform "Any CPU". As a result, the project that was at the root of the problem was not configured to be built, and the post-build process was failing when it couldn't find what it needed.
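    A post-build event of the kind described often boils down to a batch step like the following sketch (the SDK path and assembly name are hypothetical placeholders); if the upstream project never produced its output, the gacutil call fails and surfaces as PRJ0019:

        rem register the freshly built assembly in the GAC (MyCompany.Resources.dll is a placeholder)
        "%ProgramFiles%\Microsoft SDKs\Windows\v6.0A\bin\gacutil.exe" /nologo /i "$(TargetDir)MyCompany.Resources.dll"
        if errorlevel 1 exit /b 1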

    Read the article

  • Steam (via Wine) crashes after login - any troubleshooting tips?

    - by new Thrall
    The Steam client crashes after I login. After submitting my credentials, Steam displays a dialog box informing me the client is 'connecting'. I then see the Steam main page and news page displayed for roughly a second or so before the client crashes. I'm running Ubuntu 10.10 (I think... what's the best way to verify? uname only displays Linux), which I've installed on a USB flash drive using a casper-rw file for persistence, with wine-1.3.14. I'm not sure how to troubleshoot. How do I identify whether the problem is with Wine, Steam, the video card driver, or something else? Any ideas?
    hardware:
    motherboard: ECS Elitegroup 945GCT-M
    sound: integrated audio
    video: ATI Radeon X1950
    console output:
        ubuntu@ubuntu:~/.wine/drive_c/Program Files/Steam$ wine Steam.exe
        fixme:process:GetLogicalProcessorInformation ((nil),0x32e488): stub
        fixme:process:GetLogicalProcessorInformation (0x1010c00,0x32e488): stub
        fixme:process:SetProcessShutdownParameters (00000100, 00000000): partial stub.
        fixme:urlmon:CoInternetSetFeatureEnabled 5, 0x00000002, 1, stub
        fixme:urlmon:CoInternetSetFeatureEnabled 10, 0x00000002, 1, stub
        fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 2, 0x32d334, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 3, 0x32d338, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x1009a, 4, 0x32d33c, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100a2, 2, 0x32d964, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100a2, 3, 0x32d968, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100a2, 4, 0x32d96c, 4) stub
        err:ole:CoGetClassObject class {77f10cf0-3db5-4966-b520-b7c54fd35ed6} not registered
        err:ole:CoGetClassObject no class object {77f10cf0-3db5-4966-b520-b7c54fd35ed6} could be created for context 0x1
        fixme:wbemprox:wbem_locator_ConnectServer 0x1ab5f0, L"ROOT\CIMV2", (null), (null), (null), 0x00000080, (null), (nil), 0x42bbee8)
        fixme:dwmapi:DwmSetWindowAttribute (0x100ae, 2, 0x32d8cc, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100ae, 3, 0x32d8d0, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100ae, 4, 0x32d8d4, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100b6, 2, 0x32d80c, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100b6, 3, 0x32d810, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100b6, 4, 0x32d814, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100c0, 2, 0x32d2e4, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100c0, 3, 0x32d2e8, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100c0, 4, 0x32d2ec, 4) stub
        fixme:winhttp:WinHttpGetIEProxyConfigForCurrentUser returning no proxy used
        fixme:dwmapi:DwmSetWindowAttribute (0x100dc, 2, 0x32d94c, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100dc, 3, 0x32d950, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x100dc, 4, 0x32d954, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10118, 2, 0x32da8c, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10118, 3, 0x32da90, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10118, 4, 0x32da94, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10122, 2, 0x32d514, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10122, 3, 0x32d518, 4) stub
        fixme:dwmapi:DwmSetWindowAttribute (0x10122, 4, 0x32d51c, 4) stub
        fixme:dbghelp:elf_search_auxv can't find symbol in module

    Read the article

  • Convert MP3 to AAC, FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting from MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFMPEG (download the x64 Windows build). Also, for the best results, get the Nero AAC Encoder. Now the command lines:
    STEP 1 (from FLAC):
        ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a
    or (from MP3):
        ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a
    Now output.m4a is an intermediate state that we put an AAC wrapper on via FFmpeg.
    STEP 2:
        ffmpeg -i output.m4a -vn -acodec copy final.aac
    Done :) There are a couple of options with the FFMPEG library, in that we can import the libraries and manipulate the API directly; FFMPEG has this support. You can get the relevant libraries from here; they even have the source if you are that keen :-) In this case I am going to wrap the command lines in C# external process calls. (For the app that I am building to convert the 10 million tracks there is a complex multithreaded app to support this novel code.)
        // Arrange metadata about the call.
        // Note: the pipe (|) is a shell feature, so the pipeline is run via cmd.exe /C
        // rather than passed to ffmpeg.exe as arguments.
        Process myProcess = new Process();
        ProcessStartInfo p = new ProcessStartInfo();
        string sArgs = string.Format("/C ffmpeg -i {0} -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of {1}", inputFile, outputFile);
        p.FileName = "cmd.exe";
        p.CreateNoWindow = true;
        p.RedirectStandardOutput = true;
        p.UseShellExecute = false;
        // Execute
        p.Arguments = sArgs;
        myProcess.StartInfo = p;
        myProcess.Start();
        // Read the output before waiting, to avoid a deadlock if the output buffer fills.
        string log = myProcess.StandardOutput.ReadToEnd();
        myProcess.WaitForExit();
    Now in this case we would execute a 2nd call using the same code but with different sArgs, to put the AAC wrapper on the m4a file. That's it. So if you need to do some conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast. With conversion times of around 2-3 seconds, all of this can be done on the fly :-)
    Justin Oehlmann
    ref: StackOverflow.com
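    For the record, here is a minimal self-contained sketch of both calls wrapped in a reusable helper, under the assumption that ffmpeg.exe and neroAacEnc.exe are on the PATH (file names are placeholders):

        using System.Diagnostics;

        static void RunViaShell(string commandLine)
        {
            // cmd.exe /C interprets the pipe between ffmpeg and neroAacEnc
            var psi = new ProcessStartInfo("cmd.exe", "/C " + commandLine)
            {
                CreateNoWindow = true,
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using (var proc = Process.Start(psi))
            {
                string log = proc.StandardOutput.ReadToEnd(); // drain before waiting
                proc.WaitForExit();
            }
        }

        // Step 1: transcode; Step 2: re-wrap the AAC stream without re-encoding
        RunViaShell("ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a");
        RunViaShell("ffmpeg -i output.m4a -vn -acodec copy final.aac");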

    Read the article

  • Ubuntu 12.10 boots to purple or black screen but intermittently boots fine

    - by Nic
    I have a fresh install of Ubuntu 12.10 64-bit dual booting with Win7 64-bit. Windows boots fine every time. When I choose Ubuntu from the Grub2 menu it will sometimes boot just fine. Most of the time, though, it gets stuck at a purple screen with nothing happening and no keys or key combinations working. Other times, instead of the purple screen I get a black screen with a flashing cursor at the top, and nothing happens. I need to hold down the power button to restart, and after a couple of tries it will eventually boot into Ubuntu. Once that happens, everything runs without any problems. I have tried different approaches to fix the problem but to no avail. I tried removing "quiet splash", used no splash, and nomodeset. What I got from this was seeing all the text of the boot process, but more often than not the process gets stuck right after recognizing all the USB ports and devices. If it gets stuck, nothing happens (except when I plug in a USB device: it still recognizes it with a new line of text). In the case when the boot process works, after it lists the USB devices it tells me something like: recovery of read-only filesystem necessary (it's the filesystem that Ubuntu runs on). Then it does the recovery and I get: recovery complete. After that Ubuntu will boot properly and I get to see the login screen. I have no idea what to do to fix that problem. I have to reboot 3 to 5 times every time I want to get into Ubuntu and I feel like I'm breaking my new laptop. (It's a Lenovo IdeaPad Z580, btw: i5 processor and nvidia gtx640 graphics card.) I hope someone can help me. Thanks. Edit: I just got a "failed to enable AA" error message when waking it from suspend. I don't know if that helps or has anything to do with the boot problems.
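    For reference, kernel options like the ones tried above are usually made persistent by editing the GRUB defaults and regenerating the config; a sketch (which options actually help varies by graphics hardware, so treat this as a starting point, not a fix):

        # /etc/default/grub -- boot without the splash screen, with kernel mode setting disabled
        GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

        # then regenerate grub.cfg and reboot
        sudo update-grub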

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal. After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works, and I'd suggest using it; it gives you a quick leg up. The concepts are pretty straightforward. There are a series of CLR commands that you use to configure a test and the test assertions. In between, you're calling TSQL: either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult and, therefore, less useful. Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry. You can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.
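    To give a flavor of the mechanics (a sketch, not SQL Test's own sample code; the tSQLt calls are from the open-source framework the tool builds on, and the table/view names here are made up):

        -- group tests into a test class (a schema under the covers)
        EXEC tSQLt.NewTestClass 'OrderTests';
        GO
        CREATE PROCEDURE OrderTests.[test totals are summed per customer]
        AS
        BEGIN
            -- swap dbo.Orders for an empty, constraint-free dummy copy
            EXEC tSQLt.FakeTable 'dbo.Orders';
            INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (1, 10), (1, 15);

            SELECT 1 AS CustomerId, 25 AS Total INTO #Expected;
            SELECT CustomerId, Total INTO #Actual
              FROM dbo.CustomerTotals WHERE CustomerId = 1;  -- view under test (hypothetical)

            -- the table comparison that was always missing elsewhere
            EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
        END;
        GO
        EXEC tSQLt.Run 'OrderTests';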

    Read the article

  • Why is the Oracle Specialization Program important for Your Fusion Middleware Implementation?

    - by JuergenKress
    Why is Specialization important for Oracle customers? Specialized partners are certified by Oracle, with proven references and skills. For each Oracle Fusion Middleware product, a partner who specializes has to prove successful implementations and certified consultants to achieve the Specialization status. By working with Specialized partners, your middleware project will be more successful. In EMEA we have more than 3425 partners Specialized in Oracle Fusion Middleware. How do you find the right Specialized partner? At Oracle.com/Specialized, Oracle customers can search for Specialized partners by OFM product and country of partner. Quote from IPT: "SOA Specialization is a great branding for IPT. We are the SOA specialists in the Swiss market, as we focus all our services around SOA. With 65 Swiss consultants focused on SOA Security, SOA Testing, Business Process Management and BSM (Business Service Modeling), the partnership with Oracle as the technology leader in SOA is key; therefore it was important to us to become the first SOA Specialized company in Switzerland. As a result, IPT is mentioned by Gartner as one of eight European SOA consulting firms and included in the "Guide to SOA Consulting and System Integration Service Providers"." Thomas Schaller, Partner, IPT. Do you want to become a Specialized partner? Make sure you join the SOA & Business Process Management Community. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Task scheduler does not kill task

    - by Andomar
    We have a scheduled task that sometimes hangs; it just stops responding. On Windows 2003, we had Task Scheduler configured to kill the task after 3 hours. It's a 32-bit process. On Windows 2008 R2, we've set "Stop the task if it runs longer than" and "If the running task does not end when requested, force it to stop". However, when the task hangs, it is never stopped, and stays visible in Process Explorer for days. Any clue why Windows Task Scheduler would not kill the process? (This post has a reproducible setup for this issue.)
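    For what it's worth, those two checkboxes correspond to settings in the task's XML definition (visible via right-click > Export); a sketch of the relevant fragment, assuming the 3-hour limit was kept:

        <Settings>
          <!-- "Stop the task if it runs longer than:" (ISO 8601 duration, here 3 hours) -->
          <ExecutionTimeLimit>PT3H</ExecutionTimeLimit>
          <!-- "If the running task does not end when requested, force it to stop" -->
          <AllowHardTerminate>true</AllowHardTerminate>
        </Settings>

    Comparing the exported XML against what was configured in the UI is a quick way to confirm the settings actually took effect.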

    Read the article

  • Exim queue in WHM

    - by Xobb
    Hi fellas, I've got a CentOS server with WHM. The mail server is Exim. I need Exim to put all messages in the queue and not send them directly, so I added the queue_only option to the Exim configuration, and the messages are collected in the queue now. Afterwards I found out that something is calling exim -q to process the queue every once in a while. I found the following cron job:
        0 6 * * * /scripts/exim_tidydb > /dev/null 2>&1
    which I believe has been used to process the Exim queue. I also suspect that script was installed alongside WHM. Naturally I commented it out and expected everything to work just fine, but that didn't happen: I still get the Exim queue processed once in a while. Am I missing anything? What may cause my Exim queue to be processed? Here is cat /etc/exim.conf | grep queue:
        queue_only
        deliver_queue_load_max = 3
    Thanks
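    One thing worth ruling out (an assumption, not a confirmed diagnosis): the Exim daemon itself is commonly started with a queue-runner interval, which triggers queue runs with no cron involvement at all. A quick check:

        # how was the running daemon invoked? a -q<interval> flag starts periodic queue runners
        ps aux | grep '[e]xim'
        # a typical invocation looks like:
        #   /usr/sbin/exim -bd -q1h    (daemon mode, queue run every hour)

    If a -q interval shows up there, the queue will keep being processed until the daemon is restarted without it.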

    Read the article

  • Is it normal for Java/Tomcat to keep checking for java_pid<nnnn>.hprof?

    - by Chris
    I was monitoring my JVM running Apache Tomcat 6 on Windows, and I noticed that every 3 seconds or so the JVM process (C:\Tomcat\bin\tomcat6.exe) polls to see whether or not C:\Tomcat\java_pid3748.hprof exists, where 3748 is the Windows process ID. I haven't seen a write to the hprof file, just a test for existence. (I'm using Sysinternals Process Monitor (procmon.exe) for this monitoring. In procmon the polling shows up as a QueryDirectory operation, which always returns Result "NO SUCH FILE".) Is this normal, or is this a potential red flag? I gather that these hprof files are generated, perhaps among other times, when you enable the -XX:+HeapDumpOnOutOfMemoryError Java flag. I haven't enabled it myself, though I guess it could be enabled somehow in the Tomcat startup scripts.
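    For reference, the flag in question is a standard HotSpot option and, on Tomcat, is usually passed in through the environment; a sketch (the dump path is a placeholder):

        rem bin\setenv.bat (picked up by catalina.bat if present)
        set CATALINA_OPTS=%CATALINA_OPTS% -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\Tomcat\dumps

    Searching the startup scripts and the Windows service configuration for HeapDumpOnOutOfMemoryError should tell you whether someone enabled it without your knowledge.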

    Read the article

  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 using the commands below:
        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql
    However, I receive this error every time:
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
         linux-image-server depends on linux-image-3.2.0-33-generic; however:
          Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
         linux-server depends on linux-image-server (= 3.2.0.33.36); however:
          Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
         dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         linux-image-3.2.0-33-generic
         linux-image-3.2.0-34-generic
         linux-image-server
         linux-server
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    Not sure what's wrong and why it can't process the linux-image files?
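    The recurring "gzip: stdout: No space left on device" while generating initrd images usually means the partition holding /boot is full, which then cascades into the kernel postinst failures; PHP is just a bystander. A couple of commands to check (a starting point, not a guaranteed fix):

        df -h /boot /                       # is /boot (or /) out of space?
        dpkg -l 'linux-image-*' | grep ^ii  # installed kernels; old ones can be purged
        uname -r                            # the running kernel -- keep this one

    If the partition is full, purging old kernel packages and re-running apt-get -f install typically lets the pending postinst scripts complete.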

    Read the article

  • Agile Testing Days 2012 – Day 3 – Agile or agile?

    - by Chris George
    Another early start for my last Lean Coffee of the conference, and again it was not wasted. We had some really interesting discussions around how to determine what test automation is useful, if agile is not faster, why do it? and a rather existential discussion on whether unicorns exist! The first keynote of the day was entitled "Fast Feedback Teams" by Ola Ellnestam. Again this relates nicely to the releasing-faster talk on day 2, and to something that we are looking at and some teams are actively trying. Introducing the notion of feedback, Ola described a game he wrote for his eldest child. It was a simple game where every time he clicked a button, it displayed "You've Won!". He then changed it to a Win-Lose-Win-Lose pattern and watched the feedback from his son, who twigged the pattern and got his younger brother to play, alternating turns… genius! (Must do that with my children.) The idea behind this was that you need that feedback loop to learn and progress. If you are not getting the feedback, you need to close that loop. An interesting point Ola made was to solve problems BEFORE writing software. It may be that you don't have to write anything at all; perhaps it's a communication/training issue, or perhaps the problem can be solved another way. Writing software, although it's the business we are in, is expensive, and this should be taken into account. He again mentioned frequent releases, and how they should be made as soon as stuff is ready to be released; don't leave stuff on the shelf, because there it's not earning you anything, money or data. I totally agree with this and it's something that we will be aiming for moving forwards. "Exceptions, Assumptions and Ambiguity: Finding the truth behind the story" by David Evans started off very promisingly by making references to 'Grim up North', referring to the north of England. Not sure it was appreciated by most of the audience, but it made me laugh! David explained how there are always risks associated with exceptions, giving the example of a one-way road near where he lives with an exception sign giving coaches the right to go the wrong way. You could therefore merrily swing around the corner of the one-way road straight into a coach! David showed the danger of making assumptions with lyrical quotes from Lola by The Kinks ("I'm glad I'm a man, and so is Lola") and with a picture of a toilet flush that needed instructions to operate the full and half flush. With this particular flush, you pulled the handle all the way down for half flush, and halfway down for full flush! Hmmm, a bit of a crappy user experience, methinks! Then, through a clever use of a passage from the Jabberwocky, David went on to show how mistranslation and ambiguity can completely distort the original meaning of something, and how this is a real enemy of software development. This all helped to demonstrate that the term Story is often heavily overloaded in the Agile world, and should really be stripped back to what it is for: stating a business problem and offering a technical solution. Therefore a story could be worded as "In order to {make some improvement}, we will {do something}". The first 'in order to' statement is stakeholder-neutral, and states the problem by requesting an improvement to the software/process etc. The second part of the story is the verb, the doing bit. So to achieve the 'improvement', which is not currently true, we will do something to make it true in the future.
    My PM is very interested in this, and he's observed some of the problems of overloading stories, so I'm hoping between us we can use some of David's suggestions to clarify our stories. The second keynote of the day (and our last) proved to be the most entertaining and exhausting of the conference for me: "The ongoing evolution of testing in agile development" by Scott Barber. I've never had the pleasure of seeing Scott before… OMG, I would love to have even half of the energy he has! What struck me during this presentation was Scott's explanation of how testing has become the role/job that it is (largely) today, and how this has led to the need for 'methodologies' to make dev and test work! The argument that we should be trying to converge the roles again is a very valid one, and one that a couple of the teams at work are actively pursuing with great results. Making developers as responsible for quality as testers is something that has been lost over the years, but something that we are now striving to achieve. The idea that we (testers) should be testing experts/specialists, not testing 'union members', supports this, so the entire team works on all aspects of a feature/product, with the 'specialists' taking the lead and advising/coaching the others. This leads to better propagation of information around the team and a greater holistic understanding of the project, and it allows the team to continue functioning if some of its members are off sick, for example. Feeling somewhat drained from Scott's keynote (but at the same time excited that a lot of the points he raised supported actions we are taking at work), I headed into my last presentation of Agile Testing Days 2012 before having to make my way to Tegel to catch the flight home. "Thinking and working agile in an unbending world" with Pete Walen was a talk I was not going to miss! Having spoken to Pete several times during the past few days, I was looking forward to hearing what he was going to say, and I was not disappointed. Pete started off by trying to separate the definitions of 'Agile' as in the methodology and 'agile' as in the adjective, by pronouncing them the 'English' and 'American' ways: so Agile pronounced "Ajyle" and agile pronounced "ajul". There was much confusion around what the hell he was talking about, although I thought it was quite clear. Agile: a software development methodology. agile: marked by ready ability to move with quick easy grace; having a quick, resourceful and adaptable character. Anyway, that aside (although it provided a few laughs during the presentation), the point was that many teams claim to be 'Agile' but are not, in fact, 'agile' by nature. Implementing 'Agile' methodologies that are so prescriptive actually goes against the very nature of Agile development, where a team should anticipate, adapt and explore. Pete made a valid point that very few companies intentionally put up roadblocks to impede work, so if work is being blocked/delayed, why? This is where being agile as a team pays off, because the team can inspect what's going on, explore options and adapt their processes. It is through experimentation (and that means trying and failing as well as trying and succeeding) that a team will improve and grow, leading to a focus on what really needs to be done to achieve X. So, that was it, the last talk of our conference.
    I was gutted that we had to miss the closing keynote from Matt Heusser, as Matt was another person I had spoken to a few times during the conference, but the flight would not wait, and just as well we left when we did because the traffic was a nightmare!
    My Takeaway Triple from Day 3:
    1. Release often and release small – don't leave stuff on the shelf.
    2. Keep the meaning of the word 'agile' in mind when working in 'Agile'.
    3. Look at testing as more of a skill than a role.

    Read the article

  • As an indie game dev, what processes are the best for soliciting feedback on my design/spec/idea? [closed]

    - by Jess Telford
    Background: I have worked in a professional environment where the process usually goes like the following:
    1. Brainstorm the idea
    2. Solidify the game mechanics/design
    3. Iterate on the design/idea to create a more solid experience
    4. Spec out the details of the design/idea
    5. Build it
    Step 3 is generally done with the stakeholders of the game (developers, designers, investors, publishers, etc.) to reach an 'agreement' which meets the goals of all involved. Because this process involves a series of often opposing and unique viewpoints, creative solutions can surface through discussion/iteration. This is backed up by a process for collating the changes/new ideas, as well as structured time for discussion. As a (now) indie developer, I have to play the role of all the stakeholders (developers, designers, investors, publishers, etc.), and I often find myself too close to the idea/design to make more than minor changes, which I feel leads to a local maximum when it comes to the best result (I'm looking for the global maximum, of course). I have read that ideas/game designs/unique mechanics are merely multipliers of execution, and that keeping them secret is just silly. In sharing the idea with others outside the realm of my own thinking, I hope to replicate the influence other stakeholders have. I am struggling with the collation of changes/new ideas, and with any kind of structured method of receiving feedback. My question: As an indie game developer, how and where can I share my ideas/designs to receive meaningful/constructive feedback? How can I successfully collate the feedback into a new iteration of the design? Are there any specialized websites, etc.?

    Read the article

  • Logstash shipper & server on the same box

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my logstash server accepts, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will read from /var/log/syslog-clients/* and grab all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs, and send them to elasticsearch. My question: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng, write to redis, and also read from redis and output to elasticsearch?
    Diagram of my setup:
        [client] ---syslog-ng---> [log server] ---syslog-ng <---- logstash-shipper ---> redis <---- logstash-server ----> elastic-search <--- kibana
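    For what it's worth, a single logstash instance can host both stages, since one config may declare multiple inputs and route on tags; a sketch in the 1.3+ conditional syntax (hosts, paths and the redis key are placeholders, and it assumes the redis input appends its configured tag to events it re-reads):

        input {
          file  { path => "/var/log/syslog-clients/*" type => "syslog" }
          redis { host => "127.0.0.1" data_type => "list" key => "logstash" tags => ["indexer"] }
        }
        output {
          if "indexer" in [tags] {
            elasticsearch { host => "127.0.0.1" }    # second stage: index events pulled off redis
          } else {
            redis { host => "127.0.0.1" data_type => "list" key => "logstash" }    # first stage: queue
          }
        }

    Whether this is wise is another matter: the two-process split keeps a crash of the indexing stage from stalling the shipping stage, which is a large part of why the shipper/server pattern exists.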

    Read the article

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO With The PeopleSoft Update Manager

    - by John Webb
    With the new PeopleSoft Update Manager in PeopleSoft 9.2, the way you manage updates to your PeopleSoft systems changes: you are in control of all changes, on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used. Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates, such as new features, legal and regulatory changes, or a patch related to a specific issue, process or object. Any prerequisites are automatically identified. The process of generating a change package is enabled through a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

    Read the article

  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:
        Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
        at System.Collections.Generic.List`1.set_Capacity(Int32 value)
        at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
        at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
        at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
        //additional calls here …
    My desktop PC has 8 GB of RAM, and Visual Studio's process devenv.exe uses under 2 GB of it during the build (about 3.5-4 GB of RAM is always free). It's obvious that VS can't address more than 2 GB of RAM, and when that limit is hit, the build fails. My OS is Win x64, so I "charge" devenv.exe by using the editbin.exe utility. In the VS Command Prompt I run the following:
        editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE
    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file builds successfully! Of course, you must use the proper path to devenv.exe, depending on your installation. If you are on Win x86, you need an additional step: more info here.
    P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). If your model's vertex buffer size is more than 64 MB (with the Reach profile), the model can't be built and raises an error.
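    As a quick sanity check (standard VC++ tooling, nothing specific to XNA), you can confirm the flag was applied by dumping the PE header afterwards:

        dumpbin /headers "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" | findstr /C:"large"
        rem expected output includes: "Application can handle large (>2GB) addresses"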

    Read the article

  • Error while starting web application.

    - by Lalit
    When you right-click a Web site in the Microsoft Internet Information Services (IIS) Microsoft Management Console (MMC) snap-in and then click Start, the Web site does not start and you receive the following error message:
        The process cannot access the file because it is being used by another process.
    What do I have to do? To resolve this issue I found this solution at http://support.microsoft.com/kb/890015:
        You must use the Netstat.exe utility at the command line to see if another process is using port 80 or port 443.
    But how do I check whether these ports are in use, in terms of status? What should the status be? The second solution is the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\ListenOnlyList, but this key is not found.
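    For reference, the netstat check from the KB article looks roughly like this (run in a command prompt; the last column of the netstat output is the owning process ID):

        netstat -ano | findstr ":80 :443"
        rem a LISTENING line on :80 or :443 means the port is taken; note its PID
        tasklist /FI "PID eq 1234"
        rem 1234 is a placeholder -- substitute the PID from the netstat output

    Note that findstr ":80" also matches ports such as :8080, so read the matched lines carefully.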

    Read the article

  • 24HOP gets off to a good start

    - by Rob Farley
    Session 11 is on as I write this – Ami Levin presenting about Primary Keys. It's a good session. But actually, they've all been excellent so far, not just Ami's. I've heard only good things about the content. So if you're reading this and 24HOP is still on, then tune in and take part. If it's finished, get yourself over to http://sqlpass.org/24hours and see if the sessions have been made available on demand. Yes – you should be able to watch the sessions when you want to, for a year. Watching live is best, because you can ask questions and have them answered during the session, but if there are ones you just couldn't make, then watching them on demand is a good option. Numbers have been "not bad". At the moment it's still the middle of the night for most Americans – about 6:30am in New York – and yet we've had well over a hundred at all the sessions so far, getting up to well over 300 for some sessions. And when I look through the list of names, I see a bunch of names that suggest we're reaching people from all around the world. I'm seriously looking forward to seeing the stats about which countries have been represented in the audiences. There have been a few comments about the platform. Everyone seems to consider IBTalk an improvement on LiveMeeting, but the closed captioning has met a mixed reception. Some people are loving it, whereas other people are finding the translations leave quite a bit of room for improvement. If you have feedback on this, please feel free to drop me an email (my name with an underscore at hotmail.com, or with a dot at sqlpass.org, should reach me just fine, or Twitter, etc). I don't know how many of the sessions I'll get to watch overnight, but I'm looking forward to seeing how things go as the day progresses. Big thanks to everyone who's involved: the sponsors, the PASS HQ team and the IBTalk folk who have stayed up overnight to facilitate, plus the moderators, the people doing the live captioning, and of course the speakers and attendees. I love how the SQL community gets behind things like this. Earlier, the Adelaide SQL Server User Group gathered and watched Denny Lee's session on BigData, and everyone in the group agreed that it worked really well. I took a picture of our cinema room, although you could only see a small section of the audience. @rob_farley

    Read the article

  • WebDAV "PROPFIND" exception in IIS due to network share?

    - by jacko
    We're finding continuous exceptions in the event viewer on our live box, with the following detail:
    [snippet]
        Process information:
          Process ID: 3916
          Process name: w3wp.exe
          Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
          Exception type: HttpException
          Exception message: Path 'PROPFIND' is forbidden.
        Thread information:
          Thread ID: 14
          Thread account name: OURDOMAIN\Account
          Is impersonating: True
        Stack trace:
          at System.Web.HttpMethodNotAllowedHandler.ProcessRequest(HttpContext context)
          at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
          at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
    Other specs: Windows Server 2003 R2 & IIS 6.0. We've narrowed it down to occurring when people try to access shares on the box from within the network, and have discovered (we think) that it's due to the WebDAV web service extension having previously been disabled by past staff. The exceptions are thrown when trying to access directories that are virtual dirs in IIS, and plain old UNC network shares. What are the implications of enabling the WebDAV extension on our live web server? And will this solve our problem with the exceptions in our event log?

    Read the article

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 / P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run, and all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that limits the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics'. Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases: unnecessary projects could cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key for which projects are targeted during the ETL process. To set up the filter, perform the following steps:
    1. Connect to your Primavera P6 Project Management database as Pxrptuser (the extended schema owner) and create a new view:
        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'
    The main field that MUST be selected in the view is projectobjectid. Selecting any other field instead of projectobjectid will make the view invalid for this purpose and it will not work. Any WHERE clause can be used, but projectobjectid is the key.
    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line (or update it if it exists):
        star.project.filter.ds1=star_project_view
    3. When you run the staretl.cmd or staretl.sh process, the database link to Pxrptuser is accessed and this view is used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

    Read the article

  • Windows Malicious Software Removal Tool log says it can't do all required actions. Should I be concerned?

    - by Tom
    Here's what the log file c:/Windows/debug/mrt.log of my Windows 7 install says:
        WARNING: Security policy doesn't allow for all actions MSRT may require.
        ->Scan ERROR: resource process://pid:6080 (code 0x00000005 (5))
        ->Scan ERROR: resource process://pid:5300 (code 0x00000057 (87))
        ->Scan ERROR: resource process://pid:3512 (code 0x00000057 (87))
    I use the default setup; I didn't change anything. This is the first time I've checked the log file, and this warning has been in there from the start. Can I do something about it? Or shouldn't I be concerned, because it can do everything that's necessary anyway? Do you have this warning in your log file?

    Read the article

  • External HDD is always in use when trying to safely remove

    - by Mario De Schaepmeester
    I have a WD 1TB Elements external hard drive, and every time I use the Windows 7 "safely remove" feature it gives me a dialog telling me that a process is using the disk. Using Sysinternals Process Explorer and the answer on this question (find everything with the drive letter), the result points at the $Extend folder. What is the $Extend folder and why is it in use? How can I disable it? I cannot remove it using the command line (access denied). Edit: I've followed the instructions over here, and under the registry key
        HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToBackup
    I have a Multi-String Value named IgnoreNTFS with data
        \$Extend* /s
    But this does not make any difference. Also, this question is not about a server. Additionally, I can tell that I use a program called mkv2vob to convert video files with a Matroska container into something my PS3 will play. I convert the source files straight from my external HDD, but I would expect that even if this program did not release its lock on the HDD, surely the drive cannot stay locked if the process isn't even running?
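    (As an aside, a quick way to double-check that the value was written where and how you intended, assuming it really belongs under FilesNotToBackup as the linked instructions say:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToBackup" /v IgnoreNTFS

    The output should show type REG_MULTI_SZ with the \$Extend* /s data.)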

    Read the article

  • Apache2 memory usage when uploading large files

    - by abhaga
    Hi, I am running apache2.2.12 along with PHP 5.2.10. PHP is configured to run as a separate process through fcgid. The problem is that when users upload a file, the size of the apache process swells by almost the same amount. So if somebody tries to upload a 200 MB file, one of the child processes swells to its current size + 200 MB. If 2 users simultaneously start uploading, my server crashes. Now it is the virtual memory size which is increasing, but since I am on an OpenVZ-based VPS, that is what counts. My questions are: Is this normal Apache behavior, or can I do something to fix it? If not, is there a more memory-efficient way of handling big file uploads? Going by the current behavior, I will need 1 GB of free RAM for every apache child accepting an upload. Thanks! Abhaya

    Read the article

  • Dynamic endpoint binding in Oracle SOA Suite by Cattle Crew

    - by JuergenKress
    Why is dynamic endpoint binding needed? Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We'll show you how to achieve that using dynamic endpoint binding. Let's imagine the following scenario: we're running a car rental agency called RYLC (Rent Your Legacy Car) which operates different locations. The process of renting a car is basically identical for all locations, except for determining which cars are currently available. There are three different implementations of the GetAvailableCars service. But how can we call them dynamically at run-time using Oracle SOA Suite? How to dynamically set the service endpoint: there are just a couple of implementation steps we need to perform to enable dynamic endpoint binding:
    1. Create a new SOA project in JDeveloper.
    2. Add a CarRental BPEL process.
    3. Add an external reference to the GetAvailableCars service within the composite.
    4. Create a DVM file containing the URIs by which the services for the different locations can be accessed.
    5. Set the endpointURI property on the Invoke component calling the GetAvailableCars service (the value is taken from the DVM file); see the sketch below.
    Read the complete article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
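    As a rough illustration of step 5 (a sketch, not the article's actual source; the exact property syntax varies across SOA Suite versions, so treat the bpelx usage below as an assumption):

        <!-- dynamicEndpointURI is a BPEL string variable populated from the DVM lookup -->
        <invoke name="InvokeGetAvailableCars"
                partnerLink="GetAvailableCarsService"
                operation="getAvailableCars"
                inputVariable="carsRequest" outputVariable="carsResponse">
          <bpelx:toProperties>
            <bpelx:toProperty name="endpointURI" variable="dynamicEndpointURI"/>
          </bpelx:toProperties>
        </invoke>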

    Read the article

  • Why does Xcode convert PNGs to CgBI format?

    - by Gdeglin
    According to the research done at http://imageoptim.com/tweetbot.html, Xcode's conversion of PNGs to the proprietary Apple CgBI format does not create a noticeable performance improvement. Their claim is that the conversion only reduces PNG loading time by 1 nanosecond. If this is true, why does Apple bother with the CgBI format at all? Has anyone else benchmarked loading CgBI images vs. regular PNG images on iOS devices to see if they perform differently?

    Read the article
