Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 107/401 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic, I don't really monitor it closely but Google Adsense usually record close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" also has been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50, doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I also have set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like: WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating and: WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did, "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
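    The pool directives below are the ones most often involved in hangs like this; this is only a sketch with illustrative values for a 1GB single-core box, not a known-good configuration, and the slowlog path is hypothetical. The slowlog is particularly worth enabling, since PHP-FPM dumps a backtrace for any request that exceeds request_slowlog_timeout, which usually points at the blocking call (a slow database query, an external HTTP call, a locked session file) behind the "execution timed out" warnings above.

    ```ini
    ; php-fpm pool sketch (illustrative values only, not a recommendation)
    pm = ondemand
    pm.max_children = 10              ; cap concurrent workers to protect RAM
    pm.process_idle_timeout = 10s     ; only used by the ondemand manager
    pm.max_requests = 500             ; recycle workers to contain leaks
    request_terminate_timeout = 60s   ; kill runaway requests instead of letting them pile up
    request_slowlog_timeout = 10s     ; log a backtrace for anything slower than this
    slowlog = /var/log/php-fpm/www-slow.log   ; hypothetical path
    ```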

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team. 1. Ownership Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content. How - Apply metadata that indicates the owner and the department or group that has responsibility for the content. Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection. 2. Business Purpose Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need. How - Use a simple online form, native to the content system, to collect the request and route it through workflow to management. Impact - Minimizes the amount of user-generated content that is of low value to the organization. 3. Prerequisite Education / Resources Needed Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources. How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment. Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project. 4. Success Criteria Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal. How - Set a review point for the project content that will check progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time. Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to the relevant materials for their jobs.
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • How do you deal with poor management [closed]

    - by Sybiam
    I come from a company where, during one project, we saw the client three times in total. We were never told when the client came into the office to discuss his requirements. I set up Redmine and told them that if they had any request they could post an issue there, but they never really used Redmine to publish anything. They would instead: harass a team member on the phone at any time of the day or night, hand us sheets of paper with new requests or changes, or hand us new (graphical) designs. They asked how much time it would take us to finish the project; I gave them a date plus a week to test everything and deploy, calculating that time based on the features we had to do at that point. Then they blamed us, saying our deadline was wrong and that we had lied. The truth is that one week before the deadline they added a couple of monster features out of nowhere, and during the week when we were supposed to test and deploy, my colleagues spent all day in the office changing little things. After that project, my friend developed some kind of depression and got scared every time his phone rang; they had basically used him as a communication proxy. After that project from hell (everybody was furious about it), as far as I know the designer who was working with us quit, and she had issues with the managers too. My team also started looking for work elsewhere. At first I tried to get things straight with management and set up a meeting to discuss the communication issues. What really made me leave that job for good was the following exchange. Me: "We have to discuss what went wrong on the last project. It's quite important." Him: "Let's talk about it in a week or two. Just make a list of all the things you did wrong." Me: "We already have a new project and we want to prevent what happened on the last project from happening again." Him: "Just do it and we'll have our meeting in a week; make a list of all the things you did wrong." It pretty much ended there; he later organized a meeting at a time when I wasn't able to come. My friend talked with him and tried to explain that we really had to discuss organizational issues and how to manage a project, and his answer was pretty much: "During the meeting I don't want to hear how you want us to manage a project, I want to know what you guys did wrong." After that I felt it wasn't worth discussing anything, since they weren't ready to listen to us. I found a new job and I'm pretty happy with my choice. I'd like to know how you'd handle such a situation. Is there anything that can be done to solve a communication problem like this? After that project my friend got depressed, and some other employees hit their low points too, as far as I know. I wonder what else we can do other than leave such places as soon as possible. I feel sad for the people who are still there and get screamed at, just because they need money in order to eat, and finding another job like that isn't easy. Note: I died a little when our boss asked us to make a list of things we (programmers) did wrong. This is probably the stupidest request I've ever gotten. Just because everybody thinks they did everything right doesn't mean there are no problems. Individual mistakes are rarely the big issue; colleagues help each other and solve those issues before they become problems.

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor In Oracle Applications Cloud Release 8, there’s an addition to the customization tool set, called the User Interface Text Editor  (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with other customization tools? User Interface Text Editor is launched from the Navigator Customization menu Applications customers need a way to make changes to the text that appears in the UI, without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I’d recommend always completing that description). Changing a simplified UI field label using Oracle Composer In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you’re old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages.   If the text comes from ADF resource bundles, then it’s categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too. Using UITE  The UITE enables users to search and replace text in UI strings using case sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply. Searching and replacing text in the UITE The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it’s generally used in the application, depending on the context. Previewing replacement text changes. Changes can be excluded where required. Multi-Part Messages The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX Design Patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even concatenate together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it’s constructed in total. 
That way, users can ensure that any text changes being made are consistent throughout the different message parts. Multi-part (Message Dictionary) message components in the UITE Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application’s UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions? It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply write to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite it back to the file. If another process writes to it after it is in memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in-place leaves empty bytes in it unless the remaining bytes are shifted towards the beginning of the file. My idea for removing a line is this (in pseudocode): filename = "/home/user/somefile"; file = open(filename, "r"); tmp = open(filename+".tmp", "ax") || die("could not create tmp file"); //"a" is O_APPEND, "x" is O_EXCL|O_CREAT while(write(tmp, read(file)); //copy the $file to $file+".new" close(file); //edit tmp file unlink(filename) || die("could not unlink file"); file = open(filename, "wx") || die("another process must have written to the file after we copied it."); //"w" is overwrite, "x" is force file creation while(write(file, read(tmp))); //copy ".tmp" back to the original file unlink(filename+".tmp") || die("could not unlink tmp file"); Or would I be better off with a simple lock file? Appender process: lock = open(filename+".lock", "wx") || die("could not lock file"); file = open(filename, "a"); write(file, "stuff"); close(file); close(lock); unlink(filename+".lock"); Editor process: lock = open(filename+".lock", "wx") || die("could not lock file"); file = open(filename, "rw"); while(contents += read(file)); //edit "contents" write(file, contents); close(file); close(lock); unlink(filename+".lock"); Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks. Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of the line $id. It wouldn't be a CSV file anymore, but it might just work. Yes, I know I should be using a database for this as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
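    For comparison, here is a minimal Python 3 sketch of the lock-file variant, assuming every writer (appender and editor alike) cooperates by taking the same lock. It leans on exactly the primitive identified above, O_CREAT|O_EXCL, plus an atomic rename for the rewrite; the stale-lock problem mentioned above still has to be handled separately, for example by checking whether the PID written into the lock file is still alive.

    ```python
    import errno
    import os
    import time

    def acquire_lock(path, retries=100, delay=0.1):
        """Create path + '.lock' with O_CREAT|O_EXCL; only one process can win."""
        lockpath = path + ".lock"
        for _ in range(retries):
            try:
                fd = os.open(lockpath, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                os.write(fd, str(os.getpid()).encode())  # helps spot stale locks
                os.close(fd)
                return lockpath
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
                time.sleep(delay)
        raise RuntimeError("could not lock %s" % path)

    def remove_lines(path, should_drop):
        """Rewrite the CSV under the lock; appenders must take the same lock."""
        lockpath = acquire_lock(path)
        try:
            tmp = path + ".tmp"
            with open(path) as src, open(tmp, "w") as dst:
                for line in src:
                    if not should_drop(line):
                        dst.write(line)
            os.replace(tmp, path)  # atomic on POSIX; readers never see a half-written file
        finally:
            os.unlink(lockpath)
    ```

    The copy-then-rename step means other processes see either the old file or the new one, never a partially shifted one, which sidesteps the in-place editing problem described above.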

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked…..i.e. “when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”! Let’s take a scenario where you are working on a Sales Force Automation project. We’ll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms……implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business. Now let’s take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project “Process Centric”?.......well you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions. At this point you might be asking “why couldn’t I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example (Task Name: Best Suited To. Notes):
    - Manage outbound calls: Process Model. A series of linked human and system tasks for calling and following up with prospects.
    - Manage content revision: Use Case. Updating the content on a website.
    - Update User Preferences: Use Case. Updating a user's display preferences.
    - Assign Lead: Process Model. Reviewing a lead, then assigning it to a sales person.
    - Convert Lead to Quote: Process Model. Updating the status of a lead, and then converting it to a sales order.
    As you can see, it’s not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, then that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let’s take one final scenario….. As you capture the To-Be Process flows for the Sales Force Automation project, you discover a “Gap” between what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea about the Agile development process, but it does not seem to fit well with an embedded project that has significant hardware changes. I will describe below what we are currently doing (ad-hoc, with no defined process yet). The changes are divided into three categories, and a different process is used for each of them:
    Complete hardware change (example: use a different video codec IP): a) study the new IP, b) RTL/FPGA simulation, c) implement the legacy interface and go back to b), d) wait until hardware (tape-out) is ready, e) test on the real hardware.
    Hardware improvement (example: enhance the image display quality by improving the underlying algorithm): a) RTL/FPGA simulation, b) wait until hardware is available and test on the hardware.
    Minor change (example: only change the hardware register mapping): a) wait until hardware is available and test on the hardware.
    The worry is that we don't have much control over, or confidence in, software maturity across these hardware changes, as the bring-up schedule is always very tight and the customer expects a seamless transition when updating to a new version of the hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?
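    One common answer to the "test before the hardware exists" question is to put a thin HAL between the application and the registers and keep a pure-software fake behind the same interface. The sketch below is illustrative only (the class names and register layout are invented, and real firmware would likely be in C); it shows the shape of the pattern rather than any specific project's code.

    ```python
    from abc import ABC, abstractmethod

    class CodecHAL(ABC):
        """The narrow interface the application layer codes against."""

        @abstractmethod
        def configure(self, width, height): ...

        @abstractmethod
        def decode_frame(self, bitstream): ...

    class RealCodec(CodecHAL):
        """Talks to the silicon; only this class knows the register mapping,
        so a 'minor change' to the mapping stays inside one file."""

        def __init__(self, regs):
            self.regs = regs

        def configure(self, width, height):
            self.regs.write("FRAME_SIZE", (width << 16) | height)  # invented register

        def decode_frame(self, bitstream):
            self.regs.write("BITSTREAM_ADDR", bitstream)
            return self.regs.read("STATUS")

    class FakeCodec(CodecHAL):
        """Pure-software stand-in: lets application code and automated tests
        run before tape-out, and later doubles as a regression reference."""

        def configure(self, width, height):
            self.size = (width, height)

        def decode_frame(self, bitstream):
            return "DECODE_OK"
    ```

    Automated tests can then be written once against CodecHAL and run twice: against FakeCodec continuously, and against RealCodec during bring-up.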

    Read the article

  • Session State with MVP and Application Controller patterns

    - by Graham Bunce
    Hi, I've created an MVP (passive view) framework for development and decided to go for an Application Controller pattern to manage the navigation between views. This is targeted at WinForms, ASP.NET and WPF interfaces. Although I'm not 100% convinced that these view technologies are really swappable, that's my aim at the moment, so my MVP framework is quite lightweight. What I'm struggling to fit in is the concept of a "Business Conversation" that needs state information to be either (a) maintained for the lifetime of the View or, more likely, (b) maintained across several views for the lifetime of a use case (business conversation). I want state management to be part of the framework, as I don't want developers to worry about it. All they need to do is "start" a conversation and "register" objects, and the framework does the rest until they "end" the conversation. Has anybody got any thoughts (patterns) on how to fit this into MVP? I was thinking it may be part of the Application Controller responsibility (delegating to a Conversation Manager object), as it knows about current state in order to send the user to the next view.... but then I thought it may be up to the Presenter to start and end the conversation, so it comes down to the presenters to manage conversations and the objects registered for that conversation. Unfortunately that means presenters can't be used in different conversations... so that idea doesn't seem right. As you can see, I don't think there is an easy answer (and I've looked for a while). So does anybody else have any thoughts?
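    One way to sketch the first idea (the Application Controller owning conversation lifetime and delegating the bookkeeping to a Conversation Manager) is shown below. This is only an illustrative outline, in Python rather than .NET, and all the names are invented; the point is that presenters receive a conversation handle rather than owning it, so they stay reusable across conversations.

    ```python
    import uuid

    class ConversationManager:
        """Holds the registered objects for each active business conversation."""

        def __init__(self):
            self._conversations = {}

        def start(self):
            cid = uuid.uuid4()
            self._conversations[cid] = {}
            return cid

        def register(self, cid, key, obj):
            self._conversations[cid][key] = obj

        def lookup(self, cid, key):
            return self._conversations[cid][key]

        def end(self, cid):
            self._conversations.pop(cid, None)

    class ApplicationController:
        """Navigates between views and owns conversation lifetime, so
        presenters never need to know when a conversation starts or ends."""

        def __init__(self):
            self.conversations = ConversationManager()

        def navigate(self, presenter, cid=None):
            cid = cid if cid is not None else self.conversations.start()
            presenter.bind(self.conversations, cid)  # presenter gets a handle only
            presenter.show()
            return cid

        def finish(self, cid):
            self.conversations.end(cid)
    ```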

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete it is built, and the code is simply committed to SVN. The changed files (templates, HTML, classes, JSP) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing. (This often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well.) The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is followed. If rejected, the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article

  • Django one form for two models

    - by martinthenext
    Hello! I have a ForeignKey relationship between TextPage and Paragraph, and my goal is to build a front-end TextPage create/edit form as if it were in ModelAdmin with 'inlines': several fields for the TextPage and then a couple of Paragraph instances stacked inline. The problem is that I have no idea how to validate and save that: @login_required def textpage_add(request): profile = request.user.profile_set.all()[0] if not (profile.is_admin() or profile.is_editor()): raise Http404 PageFormSet = inlineformset_factory(TextPage, Paragraph, extra=5) if request.POST: try: textpageform = TextPageForm(request.POST) # formset = PageFormSet(request.POST) except forms.ValidationError as error: textpageform = TextPageForm() formset = PageFormSet() return render_to_response('textpages/manage.html', { 'formset' : formset, 'textpageform' : textpageform, 'error' : str(error), }, context_instance=RequestContext(request)) # Saving data if textpageform.is_valid() and formset.is_valid(): textpageform.save() formset.save() return HttpResponseRedirect(reverse(consults)) else: textpageform = TextPageForm() formset = PageFormSet() return render_to_response('textpages/manage.html', { 'formset' : formset, 'textpageform' : textpageform, }, context_instance=RequestContext(request)) I know it's kind of code-monkey style to post code that you don't even expect to work, but I wanted to show what I'm trying to accomplish. Here is the relevant part of models.py: class TextPage(models.Model): title = models.CharField(max_length=100) page_sub_category = models.ForeignKey(PageSubCategory, blank=True, null=True) def __unicode__(self): return self.title class Paragraph(models.Model): article = models.ForeignKey(TextPage) title = models.CharField(max_length=100, blank=True, null=True) text = models.TextField(blank=True, null=True) def __unicode__(self): return self.title Any help would be appreciated. Thanks!
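    For what it's worth, the usual shape of a create view with an inline formset is sketched below. It assumes TextPage, Paragraph, TextPageForm and the consults view are importable from the app's own modules (they all appear in the post above, but their imports are not shown), and it drops the profile/permission check for brevity. The trick is to save the parent form with commit=False, re-bind the formset to that unsaved instance, and only save both once both validate.

    ```python
    from django.contrib.auth.decorators import login_required
    from django.core.urlresolvers import reverse
    from django.forms.models import inlineformset_factory
    from django.http import HttpResponseRedirect
    from django.shortcuts import render_to_response
    from django.template import RequestContext

    ParagraphFormSet = inlineformset_factory(TextPage, Paragraph, extra=5)

    @login_required
    def textpage_add(request):
        if request.method == 'POST':
            textpageform = TextPageForm(request.POST)
            formset = ParagraphFormSet(request.POST)
            if textpageform.is_valid():
                textpage = textpageform.save(commit=False)
                # re-bind the formset to the unsaved parent so the FK is set on save
                formset = ParagraphFormSet(request.POST, instance=textpage)
                if formset.is_valid():
                    textpage.save()
                    formset.save()
                    return HttpResponseRedirect(reverse(consults))
        else:
            textpageform = TextPageForm()
            formset = ParagraphFormSet()
        return render_to_response('textpages/manage.html',
                                  {'textpageform': textpageform, 'formset': formset},
                                  context_instance=RequestContext(request))
    ```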

    Read the article

  • PyDev and Django: PyDev breaking Django shell?

    - by Rosarch
    I've set up a new project, and populated it with simple models. (Essentially I'm following the tut.) When I run python manage.py shell on the command line, it works fine: >python manage.py shell Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from mysite.myapp.models import School >>> School.objects.all() [] Works great. Then, I try to do the same thing in Eclipse (using a Django project that is composed of the same files.) Right click on mysite project Django Shell with Django environment This is the output from the PyDev Console: >>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version)) C:\Python26\python.exe 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] >>> >>> from django.core import management;import mysite.settings as settings;management.setup_environ(settings) 'path\\to\\mysite' >>> from mysite.myapp.models import School >>> School.objects.all() Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Python26\lib\site-packages\django\db\models\query.py", line 68, in __repr__ data = list(self[:REPR_OUTPUT_SIZE + 1]) File "C:\Python26\lib\site-packages\django\db\models\query.py", line 83, in __len__ self._result_cache.extend(list(self._iter)) File "C:\Python26\lib\site-packages\django\db\models\query.py", line 238, in iterator for row in self.query.results_iter(): File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 287, in results_iter for rows in self.execute_sql(MULTI): File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 2368, in execute_sql cursor = self.connection.cursor() File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 81, in cursor cursor = self._cursor() File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 170, in _cursor self.connection = Database.connect(**kwargs) OperationalError: unable to open database file What am I doing wrong here?
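    One frequent cause of "unable to open database file" when the same project works from the command line but not from an IDE console is a relative SQLite path: the PyDev console's working directory is usually not the project directory, so the relative name no longer resolves. If that is what is happening here, anchoring the path in settings.py removes the dependence on the working directory. The setting names below are the pre-Django-1.2 style used by older tutorials, and the filename is hypothetical:

    ```python
    # settings.py
    import os

    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # absolute path, IDE-proof
    ```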

    Read the article

  • Rules engine for spatial and temporal reasoning?

    - by John
    I have an application that receives a number of datums that characterize spatial / temporal processes. It then filters these datums and creates actions which are then sent to processes that perform the actions. Rinse and repeat. At present, I have a collection of custom filters that perform a lot of complicated spatial/temporal calculations. Many times as I discuss my system to individuals in my company, they ask if I'm using a rules engine. I have yet to find a rules engine that is able to reason well temporally and spatially. (Things like When are two entities ever close? Is entity A ever in region B? If entity C is near entity D but oriented backwards relative to C then perform action D.) I have looked at Drools, Cyc, Jess in the past (say 3-4 years ago). It's time to re-examine the state of the art. Any suggestions? Any standards that you know of that support this kind of reasoning? Any defacto standards? Any applications? Thanks!

    Read the article

  • Python - Help with multiprocessing / threading basics.

    - by orokusaki
    I haven't ever used multi-threading, and I decided to learn it today. I was reluctant to ever use it before, but when I tried it out it seemed way too easy, which makes me wary. Are there any gotchas in my code, or is it really that simple? import uuid import time import multiprocessing def sleep_then_write(content): time.sleep(5) f = open(unicode(uuid.uuid4()), 'w') f.write(content) f.close() if __name__ == '__main__': for i in range(3): p = multiprocessing.Process(target=sleep_then_write, args=('Hello World',)) p.start() My primary purpose in using threading would be to offload multiple images to S3 after re-sizing them, all at the same time. Is that a reasonable task for Python's multiprocessing? I've read a lot about certain types of tasks not really getting any gain from using threading in Python due to the GIL, but it seems that multiprocessing completely removes that worry, yes? I can imagine a case where 50 users hit the system and it spawns 150 Python interpreters. I can also imagine that wouldn't be good on a production server. How can something like that be avoided? Finally (but most importantly): how can I return control back to the caller of the new processes? I need to be able to continue with returning an HTTP response and content back to the user, and then have the processes continue doing their work after the user of my website is done with the transaction.
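    It really is roughly that simple for fire-and-forget work, but the two production concerns raised above (unbounded process counts and returning control to the caller) are usually handled with a fixed-size pool. A minimal sketch, assuming the resize-and-upload function is defined elsewhere:

    ```python
    import multiprocessing

    def resize_and_upload(image_path):
        # placeholder for the real resize + S3 upload work
        pass

    def main():
        # A fixed-size pool caps the number of interpreters: 50 users share the
        # same four workers instead of spawning 150 processes.
        pool = multiprocessing.Pool(processes=4)

        # apply_async returns immediately, so the caller (e.g. the web request)
        # can send its HTTP response while the workers keep running.
        for path in ["a.jpg", "b.jpg", "c.jpg"]:
            pool.apply_async(resize_and_upload, (path,))

        pool.close()
        pool.join()   # a long-lived server would skip this and keep the pool alive

    if __name__ == '__main__':   # required on Windows, where there is no fork()
        main()
    ```

    In a real web application the more common pattern is a persistent pool or work queue started once and fed per request, since creating processes per request is exactly how you end up with 150 interpreters.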

    Read the article

  • Django Error - AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1 I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver but here is where I get the following error. Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute translation.activate('en-us') File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate return real_activate(language) File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader return g['real_%s' % caller](*args, **kwargs) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate _active[currentThread()] = translation(language) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch for localepath in settings.LOCALE_PATHS: File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS' Now, I can add a LOCAL_PATH atribute set to an empty string to my settings.py file but then it just complains about another setting and so on. What am I missing here?
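    Before adding settings one by one, it is worth confirming which Django the interpreter is actually importing: this error usually means the settings object was built without the framework's global defaults, and a stale or second Django copy under site-packages is a common way for that to happen. A quick hedged check, run with the same interpreter that runs manage.py:

    ```python
    import django
    print(django.VERSION)     # expect (1, 1, 1, ...)
    print(django.__file__)    # expect the path of the install you think you're using

    from django.conf import global_settings
    print(hasattr(global_settings, 'LOCALE_PATHS'))  # should be True on 1.1.x
    ```

    If the path or version is not what you expect, removing the stray copy (or fixing PYTHONPATH) is more likely to help than patching settings.py.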

    Read the article

  • No coverage for runtime with Devel::Cover and ModPerl::Registry

    - by codeholic
    When I'm running Devel::Cover with ModPerl::Registry, I get no coverage info except for BEGIN blocks. When I'm running the same script with Devel::Cover from command line or as a CGI, everything works alright (obviously). How can I make Devel::Cover "see" my code being executed in the runtime? Here's Devel::Cover related stuff in my httpd.conf: MaxClients 1 PerlSetEnv DEVEL_COVER_OPTIONS -db,/tmp/cover_db,silent,1 PerlRequire /var/www/project/startup.pl Here's startup.pl: #!/usr/bin/perl use strict; use warnings; use Apache2::Directive (); use File::Basename (); use File::Find (); BEGIN { # Devel::Cover database must be writable by worker processes my $conftree = Apache2::Directive::conftree->as_hash; my $name = $conftree->{User} or die "couldn't find user in Apache config"; print "user=$name\n"; my $uid = getpwnam($name); defined $uid or die "couldn't determine uid by name"; no warnings 'redefine'; local $> = $uid; require Devel::Cover; my $old_report = \&Devel::Cover::report; *Devel::Cover::report = sub { local $> = $uid; $old_report->(@_) }; Devel::Cover->import; } 1; (As you may see, I made a monkey patch for Devel::Cover since startup.pl is being run by root, but worker processes run under a different user, and otherwise they couldn't read directories created by startup.pl. If you know a better solution, make a note, please.)

    Read the article

  • Visual Studio Debugging is not attaching to WebDev.WebServer.EXE

    - by Aaron Daniels
    I have a solution with many projects. On Debug, I have three web projects that I want to start up on their own Cassini ASP.NET Web Development servers. In the Solution Properties - Common Properties - Startup Project, I have Multiple startup projects chosen with the three web applications' Action set to Start. All three web development servers start, and all three web pages load. However, Visual Studio is only attaching to two of the WebDev.WebServer.EXE processes. I have to manually go attach to the third process in order to debug it with the debugger. This behavior just started happening, and I'm at a loss as to how to troubleshoot this. Any help is appreciated. EDIT: Also to note, I have stopped and restarted the development servers several times with no change in behavior. Also, when attaching to the process manually, I see that the Type property of the two automatically attached WebDev.WebServer.EXE processes is Managed, while the Type property of the unattached WebDev.WebServer.EXE process is TSQL, Managed, x86. When looking at the project's properties, however, I am targeting AnyCPU, and do NOT have SQL Server debugging enabled. EDIT: Also to note, the two projects that attach correctly are C# web applications. <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids> The project that is not attaching correctly is a VB.NET web application. <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{F184B08F-C81C-45F6-A57F-5ABD9991F28F}</ProjectTypeGuids> EDIT: Also to note, the behavior is the same on another workstation. So odds are that it's not a machine specific problem.

    Read the article

  • Migration for creating and deleting model in South

    - by Almad
    I've created a model and created initial migration for it: db.create_table('tvguide_tvguide', ( ('id', models.AutoField(primary_key=True)), ('date', models.DateField(_('Date'), auto_now=True, db_index=True)), )) db.send_create_signal('tvguide', ['TVGuide']) models = { 'tvguide.tvguide': { 'channels': ('models.ManyToManyField', ["orm['tvguide.Channel']"], {'through': "'ChannelInTVGuide'"}), 'date': ('models.DateField', ["_('Date')"], {'auto_now': 'True', 'db_index': 'True'}), 'id': ('models.AutoField', [], {'primary_key': 'True'}) } } complete_apps = ['tvguide'] Now, I'd like to drop it: db.drop_table('tvguide_tvguide') However, I have also deleted corresponding model. South (at least 0.6.2) is however trying to access it: (venv)[almad@eva-03 project]$ ./manage.py migrate tvguide Running migrations for tvguide: - Migrating forwards to 0002_removemodels. > tvguide: 0001_initial Traceback (most recent call last): File "./manage.py", line 27, in <module> execute_from_command_line() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line utility.execute() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 222, in execute output = self.handle(*args, **options) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/management/commands/migrate.py", line 91, in handle skip = skip, File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 581, in migrate_app result = run_forwards(mapp, [mname], fake=fake, db_dry_run=db_dry_run, verbosity=verbosity) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 388, in run_forwards verbosity = verbosity, File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/migration.py", line 287, in run_migrations orm = klass.orm File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 62, in __get__ self.orm = FakeORM(*self._args) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 45, in FakeORM _orm_cache[args] = _FakeORM(*args) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 106, in __init__ self.models[name] = self.make_model(app_name, model_name, data) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/orm.py", line 307, in make_model tuple(map(ask_for_it_by_name, bases)), File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 23, in ask_for_it_by_name ask_for_it_by_name.cache[name] = _ask_for_it_by_name(name) File "/home/almad/projects/mypage-all/lib/python2.6/site-packages/south/utils.py", line 17, in _ask_for_it_by_name return getattr(module, bits[-1]) AttributeError: 'module' object has no attribute 'TVGuide' Is there a way around?
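    There is a way around, but note that the traceback above comes from the frozen ORM of 0001_initial trying to resolve the now-deleted model class, not from the drop itself, so upgrading South is often the simplest fix. For reference, the removal migration itself is usually just the following (a sketch against the South 0.6-era API; the frozen models dict describes the app after the migration, so the deleted model is simply absent):

    ```python
    # 0002_remove_tvguide.py (sketch)
    from south.db import db
    from django.db import models

    class Migration:

        def forwards(self, orm):
            db.delete_table('tvguide_tvguide')

        def backwards(self, orm):
            db.create_table('tvguide_tvguide', (
                ('id', models.AutoField(primary_key=True)),
                ('date', models.DateField(auto_now=True, db_index=True)),
            ))

        models = {}
        complete_apps = ['tvguide']
    ```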

    Read the article

  • ASP.NET MVC - Javascript array always passed to controller as null

    - by Xuan Vu
    I'm having some problem with passing a javascript array to the controller. I have several checkboxes on my View, when a checkbox is checked, its ID will be saved to an array and then I need to use that array in the controller. Here are the code: VIEW: var selectedSearchUsers = new Array(); $(document).ready(function () { $("#userSearch").click(function () { selectedSearchUsers.length = 0; ShowLoading(); $.ajax({ type: "POST", url: '/manage/searchusers', dataType: "json", data: $("#userSearchForm").serialize(), success: function (result) { UserSearchSuccess(result); }, cache: false, complete: function () { HideLoading(); } }); }); $(".userSearchOption").live("change", function () { var box = $(this); var id = box.attr("dataId"); var checked = box.attr("checked"); if (checked) { selectedSearchUsers.push(id); } else { selectedSearchUsers.splice(selectedSearchUsers.indexOf(id), 1); } }); $("#Send").click(function () { var postUserIDs = { values: selectedSearchUsers }; ShowLoading(); $.post("/Manage/ComposeMessage", postUserIDs, function (data) { }, "json"); }); }); When the "Send" button is clicked, I want to pass the selectedSearchUsers to the "ComposeMessage" action. Here is the Action code: public JsonResult ComposeMessage(List values) { //int count = selectedSearchUsers.Length; string count = values.Count.ToString(); return Json(count); } However, the List values is always null. Any idea why? Thank you very much.

    Read the article

  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation gives rise to the following error sometimes: DBD::mysql::st execute failed: Deadlock found when trying to get lock; try restarting transaction at Db.pm line 276. The SQL statement that triggers the error is something like this: UPDATE file_table SET a_lock = 'process-1234' WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z' LIMIT 47 The error is triggered only sometimes. I'd estimate in 1% of calls or less. However, it never happened with a small table and has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try and work on the same row. The limit is designed to break their work into small chunks. I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows: my $dsn = "DBI:mysql:database=" . $DbConfig::database . ";host=${DbConfig::hostname};port=${DbConfig::port}"; my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password, { RaiseError => 1, AutoCommit => 1 }) or die $DBI::errstr; I have seen online that several other people have reported similar errors and that this may be a genuine deadlock situation. I have two questions: What exactly about my situation is causing the error above? Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"? Thanks in advance.
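    "Restarting the transaction" just means catching the deadlock error, rolling back, and re-issuing the statement(s) after a short pause: InnoDB picks a victim when two sessions lock rows in conflicting order, and the victim is expected to retry. The sketch below shows the shape of that retry loop in Python as an illustration of the pattern (the original code is Perl/DBI, where the equivalent is wrapping the execute in eval and checking the error for the same codes); error 1213 is ER_LOCK_DEADLOCK and 1205 is ER_LOCK_WAIT_TIMEOUT.

    ```python
    import time
    import MySQLdb

    DEADLOCK, LOCK_WAIT_TIMEOUT = 1213, 1205

    def execute_with_retry(conn, sql, params=(), attempts=5):
        for attempt in range(attempts):
            try:
                cur = conn.cursor()
                cur.execute(sql, params)
                conn.commit()
                return cur.rowcount
            except MySQLdb.OperationalError as e:
                if e.args[0] not in (DEADLOCK, LOCK_WAIT_TIMEOUT):
                    raise
                conn.rollback()
                time.sleep(0.05 * (2 ** attempt))  # back off before retrying
        raise RuntimeError("gave up after repeated deadlocks")
    ```

    Lessening the frequency is a separate exercise (smaller LIMIT batches, touching rows in a consistent index order across the parallel workers), but the retry is what makes the occasional deadlock harmless.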

    Read the article

  • When using SendKeys()-InvalidOperationException: Undo Operation encountered...

    - by M0DC0M
    Here is my code public void KeyPress() { //Finds the target window and sends a key command to the application Process[] processes = Process.GetProcessesByName("calc"); IntPtr calculatorHandle; foreach (Process proc in processes) { calculatorHandle = proc.MainWindowHandle; if (calculatorHandle == IntPtr.Zero) { MessageBox.Show("Calculator is not running."); return; } SetForegroundWindow(calculatorHandle); break; } SendKeys.SendWait("1"); } After Executing this code I recieve an Error, i know the source is the SendKeys. Here is the full error I am Receiving System.InvalidOperationException was unhandled Message="The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone)." Source="mscorlib" StackTrace: at System.Threading.SynchronizationContextSwitcher.Undo() at System.Threading.ExecutionContextSwitcher.Undo() at System.Threading.ExecutionContext.runFinallyCode(Object userData, Boolean exceptionThrown) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteBackoutCodeHelper(Object backoutCode, Object userData, Boolean exceptionThrown) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.ContextAwareResult.Complete(IntPtr userToken) at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken) at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) InnerException: I'm not sure what the problem is, The number will appear in my calculator but that error pops up

    Read the article

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potential) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? Seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails: FileInputStream fin = new FileInputStream(f); FileLock fl = fin.getChannel().tryLock(); if(fl != null) { System.out.println("Locked File"); BufferedReader in = new BufferedReader(new InputStreamReader(fin)); System.out.println(in.readLine()); ... The exception is thrown on the FileLock line. java.nio.channels.NonWritableChannelException at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source) at java.nio.channels.FileChannel.tryLock(Unknown Source) at Mover.run(Mover.java:74) at java.lang.Thread.run(Unknown Source) Looking at the JavaDocs, it says Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing. But I don't necessarily need to write to it. When I try creating a FileOutpuStream, etc. for writing purposes it is happy until I try to open a FileInputStream on the same file.

    Read the article

  • py2app, pyObjc & macports compilation errors

    - by Neewok
    Hi, I'm currently writing a small python app that embeds cherrypy and django using py2app. It worked well until I tried to include pyobjc in my project, since my app needed a small GUI (which consists of a small icon in the top menu bar + a drop down menu). I can run my python script without any problem (I'm using python 2.6 with macports), but I can't launch the application bundle generated by py2app. A dialog box appears with the following message: ImportError: dlopen(/Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so, 2): no suitable image found. Did find: /Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so: mach-o, but wrong architecture I did a quick : sudo port -u install py26-pyobjc +universal but for some reason macports tries to build openssl, with which compilation fails each time. It seems the problem is related to zLib - this is what appears in the logs : :info:build ld: warning: in /opt/local/lib/libz.dylib, file is not of required architecture ...And here is the output of file /opt/local/lib/libz.dylib : /opt/local/lib/libz.dylib: Mach-O universal binary with 2 architectures /opt/local/lib/libz.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64 /opt/local/lib/libz.dylib (for architecture i386): Mach-O dynamically linked shared library i386 Nothing looks wrong to me. I'm a bit stuck here. I don't even understand what openssl has to do with pyObjc, but it looks like I can't go anywhere if I don't manage to compile it. Macports really suck sometimes :/ EDIT I manage to fix Macports issue, but not py2app one. If I get it right, py2app try to create a 32-bits app, while Core Foundation files on Snow Leopard are for 64 bits architectures. Damn. Either I build this on Leopard, either I have to find a way to create a 64bit app with py2app, but then Snow Leopard only.

    Read the article

  • Using maven-release-plugin to tag and commit to non-origin

    - by Ali G
    When I do a release of my project, I want to share the source with a wider group of people than I normally do during development. The code is shared via a Git repository. To do this, I have used the following: remote public repository - released code is pushed here, every week or so (http://example.com/public) remote private repository - non-release code is pushed here, more than daily (http://example.com/private) In my local git repository, I have the following remotes defined: origin http://example.com/private public http://example.com/public I am currently trying to configure the maven-release-plugin to manage versioning of the builds, and to manage tagging and pushing of code to the public repository. In my pom.xml, I have listed the <scm/ as follows: <scm><connection>scm:git:http://example.com/public</connection></scm> (Removing this line will cause mvn release:prepare to fail) However, when calling mvn release:clean release:prepare release:perform Maven calls git push origin tagname rather than pushing to the URL specified in the POM. So the questions are: Best practice: Should I just be tagging and committing in my private repo (origin), and pushing to public manually? Can I make Maven push to the repository that I choose, rather than defaulting to origin? I felt this was implied by the requirement of the <connection/ element in <scm/.

    Read the article

  • perl multiple tasks problem

    - by Alice Wozownik
    I have finished my earlier multithreaded program that uses perl threads and it works on my system. The problem is that on some systems that it needs to run on, thread support is not compiled into perl and I cannot install additional packages. I therefore need to use something other than threads, and I am moving my code to using fork(). This works on my windows system in starting the subtasks. A few problems: How to determine when the child process exits? I created new threads when the thread count was below a certain value, I need to keep track of how many threads are running. For processes, how do I know when one exits so I can keep track of how many exist at the time, incrementing a counter when one is created and decrementing when one exits? Is file I/O using handles obtained with OPEN when opened by the parent process safe in the child process? I need to append to a file for each of the child processes, is this safe on unix as well. Is there any alternative to fork and threads? I tried use Parallel::ForkManager, but that isn't installed on my system (use Parallel::ForkManager; gave an error) and I absolutely require that my perl script work on all unix/windows systems without installing any additional modules.
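    On the first question, the standard mechanism is waitpid: a non-blocking waitpid tells you which children have exited, so you can keep an accurate running count. The sketch below shows the bookkeeping in Python purely as an illustration of the pattern (the original is Perl, where the same calls exist as fork, waitpid with the WNOHANG flag, or a $SIG{CHLD} handler); it is Unix-style and does not cover Perl's emulated fork on Windows.

    ```python
    import os

    MAX_CHILDREN = 4
    children = set()

    def reap_finished():
        """Collect any children that have already exited (non-blocking)."""
        while children:
            pid, _status = os.waitpid(-1, os.WNOHANG)
            if pid == 0:           # no child has exited yet
                break
            children.discard(pid)

    def spawn(task):
        reap_finished()
        while len(children) >= MAX_CHILDREN:
            pid, _status = os.waitpid(-1, 0)   # block until a slot frees up
            children.discard(pid)
        pid = os.fork()
        if pid == 0:               # child
            task()
            os._exit(0)
        children.add(pid)          # parent: track the new child
    ```

    On the file question: handles opened before the fork are inherited by the child and share a file offset, which invites interleaved writes; having each child open the file itself in append mode, and write each record in a single call, is the safer pattern for short appended lines.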

    Read the article

< Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >