Search Results

Search found 4140 results on 166 pages for 'alias analysis'.


  • 11 ADF Mobile Apps in 30 Hours

    - by Shay Shmeltzer
    The Oracle ADF Mobile team took part in a special "hackathon" this weekend, where 11 teams of new college hires who recently joined Oracle spent 30 hours building enterprise mobile applications leveraging ADF Mobile. One important thing to note - none of the participants had worked with Oracle ADF Mobile before! In fact, 90% of them hadn't developed with ADF previously. All they had was a 2-hour training session before the event - and that's all they needed. From that point on they were able to build great cross-device mobile applications. So what did they build? Here are some examples: a mileage expense tracking system, an ad campaign analysis system, an expense report entry system, a bug tracking system with data analysis, a carpooling social system, a college hiring system with CV scanning, a shipment management system for farmers, a project time entry system, a for-sale post-it system (with item location), and a conference event experience system with a conference map and Twitter feed integration. It was great to see how fast developers were able to learn and leverage ADF Mobile - and how creative the teams were. So how about you? What would you build next? What would be your first ADF Mobile application? Start today!

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors - such as culture, availability of resources, and the potential travel cost of attendees - can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a "Requirements Gathering Workshop" is quite likely to include Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives and prioritizing, as well as more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implement focus areas. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than by the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

    Read the article

  • Prognostications for the Future of BI

    - by jacqueline.coolidge(at)oracle.com
    Dashboard Insight has published viewpoints on the future of BI from several vendors' perspectives, including ours, at Business Intelligence Predictions for 2011. We offered: In 2011, businesses will demand more from BI. With intense competitive and economic pressures, it's not enough to be interesting. BI must be actionable and enable people to respond smarter and faster to the opportunities and challenges of the day. Most companies rely on BI to help them understand what's going on in their business. Many are ready to make the leap from "What's going on?" to "What are we going to do about it?" Seamless integration from reporting to what-if analysis and scenario modeling helps businesses decide the right course of action. The integration of BI with SOA and BPEL will deliver the true payoff for BI by enabling companies to initiate business processes directly from their analysis, turning insight into action for a more agile and competitive business. And, I must admit, it's tough to argue with the trends identified by other vendors: enabling true self-service and engaging a larger community of users; accelerating the adoption of BI on mobile devices; embracing more advanced analytics such as data/text mining and location intelligence; and price/performance breakthroughs. It's preaching to the choir. I look forward to hearing the voices of some customers who are pushing the envelope and will post those stories as I capture them.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrumy (i.e., related to scrum itself), but rather related to a bad implementation of scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass. Should the PO interfere in UI design when there are designers at work on the scrum team? (An example of this which has happened to us is replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger, or left-aligning some content instead of centering it on the page, or stuff like that.) If so, to what extent? Colors? Layout? Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.? Should the PO interfere in validation mechanisms? For example, this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI. How granular could/should the PO get into the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a wizard of five steps, or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation? I asked these questions to judge whether our implementation is right or wrong.

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making a switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote & local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and upstream (which is used to "sync" changes from the main repository). Not sure if it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this "touch point" can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
    Check it out and let me know your thoughts!

    Read the article

  • Mocking concrete class - Not recommended

    - by Mik378
    I've just read an excerpt of the "Growing Object-Oriented Software" book which explains some reasons why mocking concrete classes is not recommended. Here is some sample code of a unit test for the MusicCentre class:

        public class MusicCentreTest {
            @Test
            public void startsCdPlayerAtTimeRequested() {
                final MutableTime scheduledTime = new MutableTime();
                CdPlayer player = new CdPlayer() {
                    @Override
                    public void scheduleToStartAt(Time startTime) {
                        scheduledTime.set(startTime);
                    }
                };

                MusicCentre centre = new MusicCentre(player);
                centre.startMediaAt(LATER);
                assertEquals(LATER, scheduledTime.get());
            }
        }

    And here is his first explanation: "The problem with this approach is that it leaves the relationship between the objects implicit. I hope we've made clear by now that the intention of Test-Driven Development with Mock Objects is to discover relationships between objects. If I subclass, there's nothing in the domain code to make such a relationship visible, just methods on an object. This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class." I can't figure out exactly what he means when he says: "This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class." I understand that the service corresponds to MusicCentre's method called startMediaAt. What does he mean by "elsewhere"? The complete excerpt is here: http://www.mockobjects.com/2007/04/test-smell-mocking-concrete-classes.html

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server followed up with recommendations from the Page Speed & YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    • If the file is compressed using GZIP, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    • Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    • If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    • Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
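
    As a rough illustration only (my sketch, not part of the original question or answer, using ASP.NET's System.Web caching API with a made-up handler and file path), the rules above might look like this when set programmatically:

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical handler serving a compressible static resource.
        public class CachedContentHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                HttpResponse response = context.Response;
                string path = context.Server.MapPath("~/content/site.css"); // placeholder path

                // Publicly cacheable by browsers and proxies; switch to Private
                // for gzipped content if broken proxies are a concern.
                response.Cache.SetCacheability(HttpCacheability.Public);
                response.Cache.SetMaxAge(TimeSpan.FromDays(7));

                // Belt and braces: send both validators.
                response.Cache.SetLastModified(File.GetLastWriteTimeUtc(path));
                response.Cache.SetETag("\"v1-examplehash\""); // placeholder ETag value

                // Tell caches the representation varies with Accept-Encoding.
                response.AppendHeader("Vary", "Accept-Encoding");

                response.ContentType = "text/css";
                response.WriteFile(path);
            }

            public bool IsReusable { get { return true; } }
        }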

    Read the article

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the most heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with usage of Service Callout under high loads, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of an OSB server, detecting threading issues relating to Service Callout, and using heap dumps and OQL to identify the related proxies and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS. Please refer to the blog posting for more details.

    Read the article

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

        /project1/search/getstuff?etc...
        /project1/analysis/getstuff?etc...
        /project1/cluster/getstuff?etc...
        /project2/search/getstuff?etc...
        /project2/analysis/getstuff?etc...
        /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class:

        class ProjectClient
        {
            SearchClient _searchclient;
            AnalysisClient _analysisclient;
            ClusterClient _clusterclient;

            string Project { get; set; }

            ProjectClient(string _project)
            {
                Project = _project;
            }
        }

    SearchClient, AnalysisClient and ClusterClient are my classes to support the respective services shown above. The problem with this approach is that ProjectClient will need to provide public methods for each of the APIs exposed by SearchClient, etc...

        public void SearchGetStuff()
        {
            _searchclient.getStuff();
        }

    Any suggestions how I can architect this better?
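
    One possible direction (just a sketch, reusing the hypothetical SearchClient/AnalysisClient/ClusterClient types from the question rather than any real API, and assuming each takes the project name in its constructor) is to expose the sub-clients as properties so ProjectClient composes them instead of re-wrapping every method:

        public class ProjectClient
        {
            // Callers write client.Search.GetStuff(...) etc., so ProjectClient
            // never has to duplicate each service method.
            public SearchClient Search { get; private set; }
            public AnalysisClient Analysis { get; private set; }
            public ClusterClient Cluster { get; private set; }

            public string Project { get; private set; }

            public ProjectClient(string project)
            {
                Project = project;
                Search = new SearchClient(project);     // assumes each sub-client takes the
                Analysis = new AnalysisClient(project); // project name and builds its own
                Cluster = new ClusterClient(project);   // base URL (e.g. /project1/search/)
            }
        }

    A caller would then write something like new ProjectClient("project1").Search.GetStuff(...), and adding a new service only requires a new property rather than another layer of wrapper methods.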

    Read the article

  • E-Bus ATG Advisor Webcast program - June Edition

    - by cwarticki
    E-Business Suite Applications Technology Group (ATG) Advisor Webcast Program – June 2014

    Thursday June 12, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST - EBS Patch Wizard Overview
    · What is the Oracle EBS Patch Wizard Tool?
    · Benefits of the Patch Wizard utility
    · Patch Wizard Interface
    · Patch Impact Analysis
    Details & Registration: Note 1672371.1 (direct registration link)

    Thursday June 26, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST - EBS Proactive Analyzers Overview
    · What are Oracle Support Analyzers?
    · How to identify the Analyzers available
    · Short Analyzers Overview
    · Patch Impact Analysis
    · How to use an Analyzer
    Details & Registration: Note 1672363.1 (direct registration link)

    If you have any questions about the schedules, or a suggestion for an Advisor Webcast to be planned in future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • WebMatrix: "The Site Has Stopped" Fix

    - by Tarun Arora
    I just got started with Azure Web Sites by creating a website from the WordPress template. Next I tried to install WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message "The following site has stopped 'xxx'".

    Step 00 – Analysis
    It took a bit of time to figure out that WebMatrix makes use of IIS Express, but it was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having trouble starting up. So I opened a CMD prompt and tried to run IISExpress.exe; this resulted in an error message. Running IISExpress.exe /trace:Error gave a more detailed reason for the failure.

    Step 1 – Fixing "The following site has stopped 'xxx'"
    Further analysis revealed that the IIS Express config files had been corrupted. So I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Back in CMD, I ran IISExpress /trace:Error again; IIS Express successfully started and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site successfully loaded up in the browser without any failures.

    Step 2 – Download the WordPress Azure Web Site using WebMatrix
    Because the config files applicationhost.config, aspnet.config and redirection.config were deleted, I lost the settings of my Azure-based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out:
    1. Open up WebMatrix, go to the Remote tab and click Download.
    2. Export the PublishSettings file from the Azure Management Portal.
    3. Upload it in the pop-up you get after clicking Download in the previous step.
    Now you should have your Azure WordPress website all set up and running from WebMatrix. Enjoy!

    Read the article

  • A Dozen USB Chargers Analyzed; Or: Beware the Knockoffs

    - by Jason Fitzpatrick
    When it comes to buying a USB charger, one is just as good as another, so you might as well buy the cheapest one, right? This interesting and detailed analysis of name-brand, off-brand, and counterfeit chargers will have you rethinking that stance. Ken Shirriff gathered up a dozen USB chargers, including official Apple chargers and counterfeit Apple chargers, as well as offerings from Monoprice, Belkin, Motorola, and other companies. After putting them all through a battery of tests he gave them overall rankings based on nine different categories, including power stability, power quality, and efficiency. The takeaway from his research? Quality varied widely between brands, but when sticking with big companies like Apple or HP the chargers were all safe. The counterfeit chargers (like the $2 Apple iPad charger knock-off he tested) proved to be outright dangerous - several actually melted or caught fire in the course of the project. Hit up the link below for his detailed analysis, including power output readings for the dozen chargers. A Dozen USB Chargers in the Lab [via O'Reilly Radar]

    Read the article

  • Smart Phones Shockingly Energy Efficient; Lead to Decreased Household Power Consumption

    - by Jason Fitzpatrick
    Given how much time our smart phones and tablets spend plugged in topping off their battery reserves, it's easy to assume they're sucking down a lot of power. Analysis shows the lilliputian but powerful devices are surprisingly efficient and may be decreasing our overall power consumption. Courtesy of energy-centric blog Outlier, we're treated to a look at the power-sipping habits of popular smart phones and mobile devices. The simple takeaway? They use shockingly little electricity over the course of the year - you can charge your new iPhone for a year of regular usage for under a buck. The more complex analysis? The proliferation of tiny and energy-efficient devices is displacing heavier energy consumers (large televisions, desktop computers, etc.) and driving a more efficient gadget-to-consumption ratio in many households. Hit up the link below to read the full post. How Much Does It Take to Charge an iPhone [via Mashable]
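
    (For a rough sense of scale - these are back-of-envelope numbers of my own, not Outlier's: a phone battery holds on the order of 5 Wh, so a full charge every day comes to about 5 Wh x 365 ≈ 1.8 kWh per year; even allowing generously for charger losses, at a typical $0.12 per kWh that works out to well under a dollar.)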

    Read the article

  • Spend Analytics on a Grand Scale

    - by jacqueline.coolidge(at)oracle.com
    The Wall St. Journal reports in "Billions in Bloat Uncovered in Beltway" that the Government Accountability Office (GAO) has released a massive study of several programs and agencies that cost U.S. taxpayers billions of dollars each year. This report could help save $100 to $200 billion by identifying duplicate spending and ineffective programs that can be consolidated or eliminated. Now, that is spend analytics on a massive scale! It remains to be seen how actionable that information will be. Certainly, there have been studies before that identify wasteful spending. But it's a great case of the power of business intelligence and spend analytics. Many companies do find significant savings when they implement spend and procurement analytics. What makes for an excellent spend analysis? It should be: objective, providing visibility across programs and/or divisions; a cross-functional analysis that links financial with performance metrics; and prescriptive and actionable. Spend and procurement analytics have been HOT during the economic downturn! I expect 2011 will see many more companies get serious about spend analytics, and I would love to hear from companies who are willing to share their experience.

    Read the article

  • Deploying Django on EC2 using Bitnami DjangoStack: WSGI script cannot be loaded

    - by Arman
    I've been struggling to deploy a Django application on Amazon EC2 using the Bitnami DjangoStack for the last couple of days. When I go to http://dewey.io I see the default Bitnami page (/opt/bitnami/apache2/htdocs/index.html); however, when I open http://dewey.io/portnoy, I get 'Internal Server Error'. But it's known that if mod_wsgi is set up correctly, the DocumentRoot value from httpd.conf is ignored, and thus I should see my Django application when accessing http://dewey.io. Essentially, the main error is this - 'Target WSGI script cannot be loaded as Python module'. Two questions: 1) any ideas how to fix these mod_wsgi errors (the Apache logs are below)? 2) how do I disable the default /opt/bitnami/apache2/htdocs/index.html page and show my homepage from the Django application when accessing http://dewey.io? Thank you in advance!

    The details: on my EC2 instance I'm running 64-bit Ubuntu 12.04 with DjangoStack 1.4-1. My Django project is located here - /opt/bitnami/apps/django/django_projects/portnoy.

        root@dewey:/opt/bitnami/apps/django/django_projects/portnoy# ls
        manage.py  README.md  settings.py  site_media  users  Procfile  sandbox  static
        test.py  topics  urls.py  views.py  __init__.pyc  templates  testviews.py

    Apache error logs (/opt/bitnami/apache2/logs/error_log):

        [Wed Jul 04 02:29:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Target WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi' cannot be loaded as Python module.
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] mod_wsgi (pid=3990): Exception occurred processing WSGI script '/opt/bitnami/apps/django/scripts/django.wsgi'.
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] Traceback (most recent call last):
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/scripts/django.wsgi", line 8, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import django.core.handlers.wsgi
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 8, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django import http
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/__init__.py", line 119, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django.http.multipartparser import MultiPartParser
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/http/multipartparser.py", line 13, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from django.utils.text import unescape_entities
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/apps/django/lib/python2.7/site-packages/django/utils/text.py", line 4, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     from gzip import GzipFile
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/python/lib/python2.7/gzip.py", line 10, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import io
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]   File "/opt/bitnami/python/lib/python2.7/io.py", line 60, in <module>
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212]     import _io
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] ImportError: /opt/bitnami/python/lib/python2.7/lib-dynload/_io.so: undefined symbol: PyUnicodeUCS2_AsEncodedString
        [Wed Jul 04 02:29:15 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico
        [Wed Jul 04 02:44:00 2012] [error] [client 140.180.6.212] File does not exist: /opt/bitnami/apache2/htdocs/favicon.ico

    Let me quickly introduce the contents of the files to make the case more concrete. This is my /etc/apache2/sites-available/default file:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName dewey.io

            Alias /site_media/ /opt/bitnami/apps/django/django_projects/portnoy/site_media/
            Alias /static/ /opt/bitnami/apps/django/lib/python2.7/site-packages/django/contrib/admin/static/
            Alias /robots.txt /opt/bitnami/apps/django/django_projects/portnoy/site_media/robots.txt
            Alias /favicon.ico /opt/bitnami/apps/django/django_projects/portnoy/site_media/favicon.ico

            CustomLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/access.log.%Y%m%d-%H%M%S 5M" combined
            ErrorLog "|/usr/sbin/rotatelogs /opt/bitnami/apps/django/django_projects/logs/error.log.%Y%m%d-%H%M%S 5M"
            LogLevel warn

            WSGIProcessGroup dewey.io
            WSGIScriptAlias / /opt/bitnami/apps/django/scripts/django.wsgi

            <Directory /opt/bitnami/apps/django/django_projects/portnoy/site_media>
                Order deny,allow
                Allow from all
                Options -Indexes FollowSymLinks
            </Directory>

            <Directory /opt/bitnami/apps/django/django_projects/portnoy/conf/apache>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    This is my /opt/bitnami/apps/django/scripts/django.wsgi file:

        import os, sys
        sys.path.append('/opt/bitnami/apps/django/lib/python2.7/site-packages/')
        sys.path.append('/opt/bitnami/apps/django/django_projects')
        sys.path.append('/opt/bitnami/apps/django/django_projects/portnoy')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'portnoy.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    Here is the relevant portion of the /opt/bitnami/apache2/conf/httpd.conf file:

        ServerRoot "/opt/bitnami/apache2"
        Listen 80
        ServerName dewey.io
        DocumentRoot "/opt/bitnami/apache2/htdocs"
        LoadModule wsgi_module modules/mod_wsgi.so
        WSGIPythonHome /opt/bitnami/python
        Include "/opt/bitnami/apache2/conf/ssi.conf"
        Include "/opt/bitnami/apps/django/conf/django.conf"
        Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf"

    Read the article

  • Delphi TBytesField - How to see the text properly - Source is HIT OLEDB AS400

    - by myitanalyst
    We are connecting to a multi-member AS400 iSeries table via HIT OLEDB and HIT ODBC. You connect to this table via an alias to access a specific member. We create the alias on the AS400 this way:

        CREATE ALIAS aliasname FOR table(membername)

    We can then query each member of the table this way:

        SELECT * FROM aliasname

    We are testing this in Delphi 6 first, but will move it to D2010 later. We are using HIT OLEDB for the AS400. We are pulling down records from a table and the field is being seen as a TBytesField. I have also tried the ODBC driver and it sees it as a TBytesField as well. Directly on the AS400 I can query the data and see readable text, and I can use the iSeries Navigator tool and see readable text as well. However, when I bring it down to the Delphi client via HIT OLEDB or HIT ODBC and try to view it via AsString, I just see unreadable text, something like this:

        ñðð@ðõñððððñ÷@õôððõñòøóóöøñðÂÁÕÒ@ÖÆ@ÁÔÅÙÉÃÁ@@@@@@@@ÂÈÙÉâãæÁðòñè@ÔK@k@ÉÕÃK@@@@@@@@@ç

    I jumbled up the text above, but those are the kinds of characters that show up. When I did a test in D2010 the text looked like Japanese or Chinese characters, but if I display it as AnsiString then it looks like it does in Delphi 6. I am thinking this may have something to do with code pages or character sets, but I have no experience in this area, so it is new to me if it is related. When I look at the Coded Character Set on the AS400 it is set to 65535. What do I need to do to make this text readable? We do have a third-party component (Delphi400) that makes things behave in a more native AS400 manner. When I use its AS400 connection and AS400 query components it shows the field as a TStringField and displays just fine. BUT we are phasing out this product (for a number of reasons) and would really like OLEDB with the ADO components to work. Just for clarification, the HIT OLEDB with TADOQuery does have some fields showing as TStringFields for many of the other tables we use... not sure why it is showing as a TBytesField in this case. I am not an AS400 expert, but looking at the field definitions on the AS400, the ones showing up as TBytesFields look the same as the ones showing up as TStringFields... but there must be a difference. Maybe due to being multi-member? So... does anyone have any guidance on how to get the correct string data that is readable? If you need more info please ask. Greg

    Read the article

  • Configuring Unity with a closed generic constructor parameter

    - by fearofawhackplanet
    I've been trying to read the article here but I still can't understand it. I have a constructor resembling the following:

        IOrderStore orders = new OrderStore(new Repository<Order>(new OrdersDataContext()));

    The constructor for OrderStore:

        public OrderStore(IRepository<Order> orderRepository)

    The constructor for Repository<T>:

        public Repository(DataContext dataContext)

    How do I set this up in the Unity config file?

    UPDATE: I've spent the last few hours banging my head against this, and although I'm not really any closer to getting it right, I think at least I can be a little more specific about the problem. I've got my IRepository<T> working OK:

        <typeAlias alias="IRepository" type="MyAssembly.IRepository`1, MyAssembly" />
        <typeAlias alias="Repository" type="MyAssembly.Repository`1, MyAssembly" />
        <typeAlias alias="OrdersDataContext" type="MyAssembly.OrdersDataContext, MyAssembly" />

        <types>
            <type type="OrdersDataContext">
                <typeConfig>
                    <constructor /> <!-- ensures parameterless constructor used -->
                </typeConfig>
            </type>
            <type type="IRepository" mapTo="Repository">
                <typeConfig>
                    <constructor>
                        <param name="dataContext" parameterType="OrdersDataContext">
                            <dependency />
                        </param>
                    </constructor>
                </typeConfig>
            </type>
        </types>

    So now I can get an IRepository<Order> like so:

        IRepository<Order> rep = _container.Resolve<IRepository<Order>>();

    and that all works fine. The problem now is when trying to add the configuration for IOrderStore:

        <type type="IOrderStore" mapTo="OrderStore">
            <typeConfig>
                <constructor>
                    <param name="ordersRepository" parameterType="IRepository">
                        <dependency />
                    </param>
                </constructor>
            </typeConfig>
        </type>

    When I add this, Unity blows up when trying to load the config file. The error message is: "OrderStore does not have a constructor that takes the parameters (IRepository`1)." What I think it is complaining about is that the OrderStore constructor takes a closed IRepository generic type, i.e. OrderStore(IRepository<Order>) and not OrderStore(IRepository<T>). I don't have any idea how to resolve this.
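
    For comparison only (not from the original question; a sketch assuming the MyAssembly types behave as described, using Unity's code-based RegisterType API instead of the config file), the same registrations expressed in code would look roughly like this:

        using System.Data.Linq;              // assuming OrdersDataContext derives from DataContext
        using Microsoft.Practices.Unity;

        var container = new UnityContainer();

        // Force the parameterless constructor, mirroring the <constructor /> element above.
        container.RegisterType<DataContext, OrdersDataContext>(new InjectionConstructor());

        // Open generic mapping: any IRepository<T> resolves to Repository<T>,
        // whose DataContext constructor argument is satisfied by the mapping above.
        container.RegisterType(typeof(IRepository<>), typeof(Repository<>));

        // OrderStore's constructor asks for the closed IRepository<Order>.
        container.RegisterType<IOrderStore, OrderStore>();

        IOrderStore store = container.Resolve<IOrderStore>();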

    Read the article

  • Apache/Django subdomains problem

    - by Thomas
    Now I have an Apache configuration which works only with the localhost domain (http://localhost/):

        Alias /media/ "/sciezka/do/instalacji/django/contrib/admin/media/"
        Alias /site_media/ "/sciezka/do/plikow/site_media/"

        <Location "/">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE settings
            PythonPath "['/thomas/django_projects/project'] + sys.path"
            PythonDebug On
        </Location>

        <Location "/site_media">
            SetHandler none
        </Location>

    How can I make it work for subdomains like pl.localhost or uk.localhost? These subdomains should display the same page as the main domain (localhost). Second question: is it possible to change the default localhost address from (http://localhost/) to (http://localhost.com/) or (http://www.localhost.com/) or something else?

    Read the article

  • working with generic lifetime managers in unity config section

    - by Martin Bailey
    I have the following generic lifetime manager:

        public class RequestLifetimeManager<T> : LifetimeManager, IDisposable
        {
            public override object GetValue()
            {
                return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];
            }

            public override void RemoveValue()
            {
                HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);
            }

            public override void SetValue(object newValue)
            {
                HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue;
            }

            public void Dispose()
            {
                RemoveValue();
            }
        }

    How do I reference this in the Unity config section? Creating a type alias

        <typeAlias alias="requestLifeTimeManager`1" type="UI.Common.Unity.RequestLifetimeManager`1, UI.Common" />

    and specifying it as a lifetime manager

        <types>
            <type type="[interface]" mapTo="[concretetype]">
                <lifetime type="requestLifeTimeManager`1" />
            </type>
        </types>

    causes the following error:

        Cannot create an instance of UI.Common.Unity.RequestLifetimeManager`1[T] because Type.ContainsGenericParameters is true.

    How do you reference generic lifetime managers?
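
    As a point of comparison (my sketch, not an answer from the original thread; IOrderService/OrderService are made-up types used purely for illustration), the closed generic lifetime manager can be supplied directly when registering in code, which avoids the open-generic alias problem entirely:

        using Microsoft.Practices.Unity;
        using UI.Common.Unity;

        var container = new UnityContainer();

        // The lifetime manager is constructed as a closed generic here, so
        // Type.ContainsGenericParameters is false by the time Unity sees it.
        container.RegisterType<IOrderService, OrderService>(
            new RequestLifetimeManager<OrderService>());

        IOrderService service = container.Resolve<IOrderService>();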

    Read the article

  • Apache2 several sites and configuration question

    - by Hellnar
    Hello. I want to host several sites from my VPS via Apache2 under Debian. For this I removed the default from sites-enabled and added this under /etc/apache/sites-enabled/www.mysite.com:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName mysite.com
            ServerAlias www.mysite.com

            Alias /media/ /home/myuser/mysite/media/
            Alias /admin_media/ /home/myuser/django/Django-1.2/django/contrib/admin/media/

            WSGIScriptAlias / /home/myuser/mysite/wsgi.py

            ErrorLog /home/myuser/mysite/logs/error.log
            CustomLog /home/myuser/mysite/logs/access.log combined
        </VirtualHost>

    After that I removed the default symbolic link from sites-enabled/ via "a2dissite default" and added the new one via "a2ensite www.mysite.com". Still, I get this error:

        Restarting web server: apache2
        apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts
        apache2: apr_sockaddr_info_get() failed for myuser
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
        [Sat May 29 11:41:13 2010] [warn] NameVirtualHost *:80 has no VirtualHosts

    What can be the problem?

    Read the article

  • Nested queries in Arel

    - by Schrockwell
    I am attempting to nest SELECT queries in Arel and/or Active Record in Rails 3 to generate the following SQL statement:

        SELECT sorted.*
        FROM (SELECT * FROM points ORDER BY points.timestamp DESC) AS sorted
        GROUP BY sorted.client_id

    An alias for the subquery can be created by doing

        points = Table(:points)
        sorted = points.order('timestamp DESC').alias

    but then I'm stuck as to how to pass it into the parent query (short of calling #to_sql, which sounds pretty ugly). How do you use a SELECT statement as a sub-query in Arel (or Active Record) to accomplish the above? Maybe there's an altogether different way to accomplish this query that doesn't use nested queries?

    Read the article
