Search Results

Search found 2454 results on 99 pages for 'domains'.


  • Windows Azure Recipe: Consumer Portal

    - by Clint Edmonson
    Nearly every company on the internet has a web presence. Many are merely using theirs for informational purposes. More sophisticated portals allow customers to register their contact information and provide some level of interaction or customer support. But as our understanding of how consumers use the web increases, the more progressive companies are taking advantage of the social web and rich media delivery to connect at a deeper level with the consumers of their goods and services.

    Drivers:
      - Cost reduction
      - Scalability
      - Global distribution
      - Time to market

    Solution: Here's a sketch of how a Windows Azure Consumer Portal might be built out.

    Ingredients:
      - Web Role – this will host the core of the solution. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally PHP or Node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads.
      - Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
      - Access Control (optional) – if identity needs to be tracked within the solution, the Access Control service combined with the Windows Identity Foundation framework provides out-of-the-box support for several social media platforms including Windows Live ID, Google, Yahoo!, and Facebook. It also has a provider model to allow integration with other platforms as well.
      - Caching (optional) – for sites with high traffic and lots of read-only data and lists, the distributed in-memory caching service can be used to cache and serve up static data at higher scale and speed than direct database requests. It can also be used to manage user session state.
      - Blob Storage (optional) – for sites that serve up unstructured data such as documents, video, audio, device drivers, and more. The data is highly available and stored redundantly across data centers. Each entry in blob storage is provided with its own unique URL for direct access by the browser.
      - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training Labs: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
      - Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
      - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
      - Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
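
    To make the Blob Storage ingredient a bit more concrete, here is a minimal, hypothetical C# sketch (not from the original recipe) of publishing a file and handing its public URL to the browser. It assumes the classic Microsoft.WindowsAzure.StorageClient library; the connection string, container name, and file paths are placeholders:

        // Sketch only - assumes the classic Microsoft.WindowsAzure and
        // Microsoft.WindowsAzure.StorageClient namespaces; storageConnectionString
        // would come from your service configuration.
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        CloudBlobContainer container = blobClient.GetContainerReference("media");
        container.CreateIfNotExist();
        container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });

        CloudBlob blob = container.GetBlobReference("drivers/setup.zip");
        blob.UploadFile(@"C:\assets\setup.zip");

        // Each blob gets its own unique URL, suitable for direct browser access.
        string publicUrl = blob.Uri.AbsoluteUri;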

    Read the article

  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:
      - Compute "Roles" – computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
      - Storage – Blobs, Tables and Queues for storage
      - Other Services – things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It's important to understand that some of these services are stateless and others maintain state. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store. If you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.

    The Compute Role Instances in Windows Azure are stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain state. Your data will not simply disappear - it is maintained - in fact, it's maintained three times in a single datacenter and all those copies are replicated to another datacenter for safety. Going back to our example, storage is similar to the cash register itself. Even though a cashier leaves, the record of your payment is maintained.

    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea - to gain the Service Level Agreement (SLA) for uptime in Azure it's a requirement. We point this out right in the Management Portal when you deploy the application.

    When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have a continuous service whenever you upgrade (more on upgrades in another post). This example covers the scenario for upgrade, so you have four Roles total - one Web and one Worker running the "older" code, and one of each running the new code. In all those Roles you want at least two Instances, and this example shows that you're covered for High Availability and upgrade paths.

    The take-away is this - always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
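
    For reference, the instance count lives in the service configuration file. A rough sketch of a .cscfg requesting two Instances might look like this (the service and role names are made-up placeholders):

        <?xml version="1.0"?>
        <!-- Hypothetical ServiceConfiguration.cscfg; "MyService" and "WebRole1" are placeholder names. -->
        <ServiceConfiguration serviceName="MyService"
                              xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <!-- Two instances: one "cashier" can disappear while the other keeps serving. -->
            <Instances count="2" />
            <ConfigurationSettings />
          </Role>
        </ServiceConfiguration>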

    Read the article

  • The gestures of Windows 8 (Consumer preview): part 2, More about Search

    - by Laurent Bugnion
    This is part 2 of a multipart blog post about the gestures and shortcuts in Windows 8 Consumer Preview. Part 1 can be found here!

    More about the Search charm

    In the first installment of this series, we talked about the charms and mentioned a few gestures to display the Search charm. Search is a very central and powerful feature in Windows 8, and allows you to search in Apps, Settings, Files and within Metro applications that support the Search contract. There are a few cool features around Search, and especially the applications associated with it. I already mentioned the keyboard shortcuts you can use:
      - Win-C shows the Charms bar (same as swiping from the right bezel towards the center of the screen).
      - Win-Q opens the Search flyout with Apps preselected.
      - Win-W opens the Search flyout with Settings preselected.
      - Win-F opens the Search flyout with Files preselected.

    Searching in Metro apps

    In addition to these three search domains, you can also search a Metro app, as long as it supports the Search contract (check this Build video to learn more about the Search contract). These apps show up in the Search flyout. Notice the list of apps below the Files button? That's what we are talking about. First of all, the list order changes when you search in some applications. For instance, in the image above, I had used the Store with the Search charm. This is why the Store shows up as the first app. I am not 100% sure what algorithm is used here (sorting according to number of searches is my guess), but try it out and try to figure it out. Applications that have never been searched are sorted alphabetically. Does it mean we will see cool app names like ___AAA_MyCoolApp? I certainly hope not!!

    Pinning

    You can also pin often-used apps to the Search flyout. To pin an app with the mouse, right-click on it in the Search flyout and select Pin from the context menu. With the keyboard, use the arrow keys to go down to the selected app, and then open the context menu. With the finger, simply tap and hold until you see a semi-transparent rectangle indicating that the context menu will be shown, then release. The context menu opens up and you can select Pin. (The original post shows screenshots of the Pin context menu and the pinned apps.)

    Unpinning, Hiding

    Using the same technique as for pinning here above, you can also unpin a pinned application. Finally, you can also choose to hide an app from the Search flyout altogether. This is a convenient way to clean up and make it easy to find stuff. Note: At this point, I am not sure how to re-add a hidden app to the Search flyout. If anyone knows, please mention it in the comments, thanks!

    Reordering

    You can also reorder pinned apps. To do this, with the finger, tap, hold and pull the app to the side, then pull it vertically to reorder it. You can also reorder with the mouse, simply by clicking on an app and pulling it vertically to the place you want to put it. I don't think there is a way to do that with the keyboard though.

    That's it for now. More gestures will follow in a next installment! Have fun with Windows 8!

    Laurent Bugnion (GalaSoft)

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps, and a video, covering what you'll need to do to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension, which you can get here.

    First we'll install some ADF Runtime libraries on GlassFish:
      1. Download and install GlassFish. (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something other than 8080.)
      2. Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
      3. Copy adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
      4. Go to the above lib directory and issue: unzip -j adf_essentials.zip
         This will extract the ADF libraries to the directory. Now you can start the GlassFish server.

    Now let's configure GlassFish to handle applications of the ADF type. Invoke the admin console of GlassFish (http://localhost:4848), log into your admin account, go to Configurations -> server-config -> JVM Settings, choose the JVM Options tab, and add the following entries:
      - -XX:MaxPermSize=512m (note: this entry should already exist, so just make sure it has a big enough value)
      - -Doracle.mds.cache=simple

    While we are in the admin console, we can also define the JDBC connections that will be used by our application:
      1. Go into Resources -> JDBC -> JDBC Connection Pools and click to create a New one.
      2. Give it a name, choose the resource type to be javax.sql.XADataSource, and choose Oracle as the Database Driver vendor. Click Next.
      3. Scroll down to the Additional Properties section and start filling in the information for your database. The values for an Oracle XE would be (user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521). Click Finish.
      4. Click Ping to check that your connection works.
      5. Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration.
      6. Re-start the GlassFish server for the changes to take effect.

    Get an ADF application going (you can use the regular Fusion Application template for this):
      1. Go into the project properties of your ViewController project; under the Deployment section, click to edit the deployment profile that is defined there. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project.
      2. Go to Application -> Application Properties -> Deployment. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project. This step makes sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is generated for deployment on GlassFish.
      3. Go to Application -> Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't, look up server.log in the log directory and check what error is in there.

    Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded directory deployment and see if they can get it to work.
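
    If you prefer the command line over the admin console, the same JVM options can probably be set with asadmin - a rough, untested sketch of an alternative to the console steps above (note that the colon in the -XX option must be escaped on the asadmin command line):

        # Hypothetical command-line equivalent of the JVM Settings steps above.
        asadmin create-jvm-options "-Doracle.mds.cache=simple"
        asadmin create-jvm-options "-XX\:MaxPermSize=512m"
        asadmin restart-domain domain1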

    Read the article

  • Introduction to WebCenter Personalization: "The Conductor"

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...

    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published, RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor

    Scenarios are declarative, offline-authored documents created with the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one (or more) statements that can:
      - Create variables that are scoped to the current execution context
      - Iterate over collections, or loop until a specific condition is met
      - Execute one or more statements when a condition is met
      - Invoke other scenarios that exist within the same namespace
      - Invoke a data provider that integrates with custom applications

    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models

    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:
      - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
      - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
      - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework

    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:
      - Function Provider: Function Providers are simple annotated Java classes with static methods that are meant to serve as utilities. Some common uses would include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1,arg2)}. If you are familiar with EL Functions, Function Providers are based on the same concept.
      - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with: standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

    Useful References

    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
      - Personalizing WebCenter Applications
      - Authoring Personalized Scenarios in JDeveloper
      - Using Personalization APIs Externally
      - Implementing and Calling Function Providers
      - Implementing and Calling Data Providers

    Read the article

  • Why does googling for keycaptcha give results on reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How to STOP Google's manipulation of the Google search engine presented to the general public?

    I google frequently, and more and more often, when searching for a particular software product, I am given results about Google's own products instead. For example, if I google for the keyword keycaptcha for the "Past 24 hours" (after clicking on "Show search tools" -> "Past 24 hours" on the left sidebar of a browser), the Google search results show only results on reCAPTCHA. (Image uploaded later.) Though, if I confine keycaptcha in quotes, the results are "correct" (well, kind of, since they are still distorted in comparison with other search engines). I checked this over a few months, from different domains at different ISPs, on different operating systems and from a dozen browsers. The results are the same. Why is it, and how can it possibly be corrected?

    My related posts: "How Gmail spam filter works?", IP addresses blacklisting.

    Update: It is impossible for me to directly start using google.com, as I am always redirected to google.ru (from google.com) by my IP-address "auto-detect location" Google "convenience". Google's help tells me that it is impossible to switch off my location auto-detection because it is a very helpful feature. There is a work-around: use google.com/ncr (to get google.com) to prevent redirection from google.com (does anybody know what "ncr" means?). But all results are exactly the same. OK, I can search by quoted "keycaptcha" - I am already accustomed to these Google quirks - but the question arises: why the heck burn time promoting one's product if GOOGLE shows its own interests/brands (reCAPTCHA) instead of other product brands, and what can be done about it? The general user will not understand that he was cheated and will just pick up the first (wrong) results.

    Update 2: Note that this googling behaviour:
      - is independent of whether I am logged in to (or logged out of) a Google account, and of which account, browser (I tried Opera, Chrome, Firefox, IE of different versions, Safari), OS or even domain;
      - shows up in many such cases, but I targeted one concrete, restricted example specifically to prevent wandering between unrelated details and peculiarities.

    @Michael, first, it is not true, and this text contains 2 links to real and significant results. I also wrote that this is just one concrete example from many, based on many months' experience. These distortions happen upon clicking on Past 24 hours, Past week, Past month, Past year, with many other keywords, and in other occasions/configurations of searches, etc. Second, the absence of results is itself a result, and there is no point in sneakily substituting it with another, unsolicited one. That is the definition of spam and scam. Third, the question is not about workarounds like how to write search queries or use other search engines. The question is how to straighten out Google's results in order to stop disorienting the general public.

    Update: I could not understand: can nobody reproduce the behavior I described (i.e., when I click the "Past 24 hours" link in a Google search for keycaptcha, the presented results are only on reCAPTCHA)?

    Update: And for the "Past week":

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include:

      - The Endeca Server supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query - not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this, and it would be used to adjust relevance accordingly.

      - The Endeca Server supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result set. Determining this second-order relevance is the key to providing effective guidance.

      - Support for Queries and Filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible, because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content.

    The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time. Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries - the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable.

    We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.

    Read the article

  • Google.com and clients1.google.com/generate_204

    - by David Murdoch
    I was looking into google.com's Net activity in Firebug just because I was curious, and noticed a request was returning "204 No Content." It turns out that a 204 No Content "is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view." Whatever. I've looked into the JS source code and saw that "generate_204" is requested like this:

        (new Image).src = "http://clients1.google.com/generate_204";

    No variable declaration/assignment at all. My first idea was that it was being used to track whether JavaScript is enabled. But the "(new Image).src='...'" call is made from a dynamically loaded external JS file anyway, so that would be pointless. Anyone have any ideas as to what the point could be?

    UPDATE: "/generate_204" appears to be available on many Google services/servers (e.g., maps.google.com/generate_204, maps.gstatic.com/generate_204, etc.). You can take advantage of this by pre-fetching the generate_204 pages for each Google-owned service your web app may use, like this:

        window.onload = function () {
            var two_o_fours = [
                // google maps domain ...
                "http://maps.google.com/generate_204",

                // google maps images domains ...
                "http://mt0.google.com/generate_204",
                "http://mt1.google.com/generate_204",
                "http://mt2.google.com/generate_204",
                "http://mt3.google.com/generate_204",

                // you can add your own 204 page for your subdomains too!
                "http://sub.domain.com/generate_204"
            ];

            for (var i = 0, l = two_o_fours.length; i < l; ++i) {
                (new Image).src = two_o_fours[i];
            }
        };

    Read the article

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS for internal "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, hosts configured with those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple of weeks without issue on one machine, then suddenly stop working; other times it'll happen 15 times per day. It's completely random across all workstations.

    When troubleshooting, I have opened up a command prompt and issued various nslookup commands for some of these hosts, and they resolve. However, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc. The only solution thus far is manually restarting the Windows DNS Client service on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful enough to even be worth attempting before restarting the DNS Client.

    I have tried two different DNS servers - BIND9 and Windows Server 2003 R2 DNS - and the behavior is the same. We have a single Netgear JGS524 switch that all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.
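
    For what it's worth, the DNS Client restart can be scripted rather than done through the Services console. A small sketch, assuming an XP/2003-era Windows where the service's internal name is "dnscache" (verify on your build):

        rem Restart the Windows DNS Client service and flush the resolver cache
        net stop dnscache
        net start dnscache
        ipconfig /flushdns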

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
    I've installed Mercurial per MT's knowledge base file here. Working with it server-side over ssh from my Mac works fine: I can initialize repositories and the like. But pulling from the server, or pushing from my Mac, produces an error I don't understand. Here's what I get when I call hg push from my local installation (hash marks represent my server number):

        remote: Traceback (most recent call last):
        remote:   File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ?
        remote:     mercurial.dispatch.run()
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run
        remote:     sys.exit(dispatch(sys.argv[1:]))
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch
        remote:     u = _ui.ui()
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__
        remote:     for f in util.rcpath():
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath
        remote:     _rcpath = os_rcpath()
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath
        remote:     path = system_rcpath()
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath
        remote:     path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles
        remote:     rcs.extend([os.path.join(rcdir, f)
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__
        remote:     self._load()
        remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load
        remote:     mod = _origimport(head, globals, locals)
        remote: ImportError: No module named osutil
        abort: no suitable response from remote hg!

    Mercurial on my Mac is configured as follows:

        [ui]
        username = John Smith
        editor = te -w
        remotecmd = ~/data/mercurial-1.5/hg

    My local single repo is configured as follows (hash marks represent my server number):

        [paths]
        default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html

    Mercurial on the server is configured with just a username:

        [ui]
        username = John Smith

    The server .bash_profile is configured as follows (per the installation guide):

        # Aliases
        alias ls-a='ls -a -l'

        # Added this as suggested by the MediaTemple guide
        export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH
        export PATH=${HOME}/bin:$PATH

    I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assistance with this problem. Thanks in advance!

    Read the article

  • search all paths and the shortest path for a graph - Prolog

    - by prologian
    Hi, I have a problem with my Turbo Prolog code, which searches all paths and the shortest path in a graph between 2 nodes. The problem I have is testing whether a node is already on the list, namely in the member/absent clauses. Here is my code (with the obvious bugs fixed: the base case now binds the cost, path1 starts with an empty visited list, and shortest uses negation instead of a comparison that only succeeds when a longer path exists):

        /* Graph (undirected, with edge costs):
               a-b = 1,  a-c = 2,  b-d = 3,  c-d = 4,  b-c = 5

           For example, for b ---> c we have
               ([b,c],5), ([b,a,c],3) and ([b,d,c],7) : possible paths
               ([b,a,c],3) : the shortest path
        */

        DOMAINS
            list = symbol*

        PREDICATES
            distance(symbol, symbol, integer)
            path1(symbol, symbol, list, integer)
            path(symbol, symbol, list, list, integer)
            member(symbol, list)
            absent(symbol, list)
            shortest(symbol, symbol, list, integer)
            shorter(symbol, symbol, integer)

        CLAUSES
            distance(a,b,1).    distance(b,a,1).
            distance(a,c,2).    distance(c,a,2).
            distance(b,d,3).    distance(d,b,3).
            distance(c,d,4).    distance(d,c,4).
            distance(b,c,5).    distance(c,b,5).

            member(X, [X|_]).
            member(X, [_|T]) :- member(X, T).

            absent(X, L) :- member(X, L), !, fail.
            absent(_, _).

            /* find all paths: start with an empty visited list */
            path1(X, Y, L, C) :- path(X, Y, L, [], C).

            path(X, X, [X], I, 0) :- absent(X, I).
            path(X, Y, [X|R], I, C) :-
                distance(X, Z, A),
                absent(Z, I),
                path(Z, Y, R, [X|I], C1),
                C = C1 + A.

            /* find the shortest path: a path with no strictly cheaper alternative */
            shortest(X, Y, L, C) :-
                path1(X, Y, L, C),
                not(shorter(X, Y, C)).
            shorter(X, Y, C) :-
                path1(X, Y, _, C1),
                C1 < C.

    Read the article

  • How to validate xml using a .dtd via a proxy and NOT using system.net.defaultproxy

    - by Lanceomagnifico
    Hi, someone else has already asked a somewhat similar question: http://stackoverflow.com/questions/1888887/validate-an-xml-file-against-a-dtd-with-a-proxy-c-2-0/2766197#2766197

    Here's my problem: we have a website application that needs to use both internal and external resources. We have a bunch of internal web services, and requests to them CANNOT go through the proxy. If we try to, we get 404 errors, since the proxy DNS doesn't know about our internal web service domains. We also generate a few XML files that have to be valid, and I'd like to use the provided DTD documents to validate the XML. The DTD URLs are outside our network and MUST go through the proxy.

    Is there any way to validate via DTD through a proxy without using system.net defaultProxy? If we use defaultProxy, the internal web services are busted, but the DTD validation works. Here is what I'm doing to validate the XML right now:

        public static XDocument ValidateXmlUsingDtd(string xml)
        {
            var xrSettings = new XmlReaderSettings {
                ValidationType = ValidationType.DTD,
                ProhibitDtd = false
            };

            var sr = new StringReader(xml.Trim());
            XmlReader xRead = XmlReader.Create(sr, xrSettings);
            return XDocument.Load(xRead);
        }

    Ideally, there would be some way to assign a proxy to the XmlReader, much like you can assign a proxy to the HttpWebRequest object. Or perhaps there is a way to programmatically turn defaultProxy on or off, so that I can just turn it on for the call to load the XDocument, then turn it off again?

    FYI - I'm open to ideas on how to tackle this. Note that the proxy is located in another domain, and they don't want to have to set up a DNS lookup to our DNS server for our internal web service addresses.

    Cheers, Lance
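
    One direction that might be worth trying (an untested sketch, not a confirmed answer): XmlReaderSettings accepts a custom XmlResolver, so the external DTD fetches could be routed through an explicit proxy while the rest of the application keeps ignoring defaultProxy. The proxy host and port below are placeholders:

        // Sketch: a resolver that fetches external entities (the DTDs) through a
        // specific proxy (System.Net / System.Xml namespaces assumed).
        public class ProxyXmlResolver : XmlUrlResolver
        {
            private readonly IWebProxy proxy = new WebProxy("proxyhost", 8080);

            public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
            {
                var request = WebRequest.Create(absoluteUri);
                request.Proxy = proxy;  // only the DTD requests use the proxy
                return request.GetResponse().GetResponseStream();
            }
        }

        // Then wire it into the existing settings:
        // xrSettings.XmlResolver = new ProxyXmlResolver();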

    Read the article

  • Javascript errors from Google Adsense

    - by Arjun
    Hey there, on several of my AdSense-running sites, I have been getting the following errors:

        Unable to post message to [http://]googleads.g.doubleclick.net. Recipient has origin http://www.anekdotz.com.

        Unsafe JavaScript attempt to access frame with URL [http://]www.anekdotz.com/ from frame with URL [http://]googleads.g.doubleclick.net/pagead/ads?client=ca-pub-9099580055602120&output=html&h=250&slotname=9210181593&w=300&flash=10.0.42&url=http%3A%2F%2Fwww.anekdotz.com%2F&dt=1269901036429&correlator=1269901036438&frm=0&ga_vid=711000587.1269901037&ga_sid=1269901037&ga_hid=654061172&ga_fc=0&u_tz=-240&u_his=2&u_java=1&u_h=900&u_w=1440&u_ah=878&u_aw=1436&u_cd=24&u_nplug=10&u_nmime=101&biw=1365&bih=806&eid=44901212&fu=0&ifi=1&dtd=153&xpc=Xkfk1oufPQ&p=http%3A//www.anekdotz.com. Domains, protocols and ports must match.

    (From the Chrome JavaScript console.) The ads seem to show properly and it doesn't affect my native JavaScript code, but sometimes these errors seem to slow down page loading. How can I fix this problem? (I modified the URLs to let me post this, as I'm a new user.)

    Read the article

  • ASP.NET Setting Culture with InitializeCulture

    - by Helen
    I have a website with three domains: .com, .de and .it. Each domain needs to default to the local language/culture of the country. I have created a base page and added an InitializeCulture override (the original had a crash-prone null check on .ToString; the version below tests the profile value itself and deduplicates the per-case culture assignments):

        Protected Overrides Sub InitializeCulture()
            Dim url As System.Uri = Request.Url
            Dim hostname As String = url.Host.ToString()
            Dim SelectedLanguage As String

            'Only apply the domain-based default when no preference is stored
            If HttpContext.Current.Profile("PreferredCulture") Is Nothing Then
                Select Case hostname
                    Case "www.domain.de"
                        SelectedLanguage = "de"
                    Case "www.domain.it"
                        SelectedLanguage = "it"
                    Case Else
                        SelectedLanguage = "en"
                End Select

                Thread.CurrentThread.CurrentUICulture = New CultureInfo(SelectedLanguage)
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(SelectedLanguage)
            End If
        End Sub

    This is fine. The problem now occurs because we also want three language-selection buttons on the home page, so that the user can override the domain language. So on my Default.aspx.vb we have three button events like this:

        Protected Sub langEnglish_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles langEnglish.Click
            Dim SelectedLanguage As String = "en"

            'Save selected user language in profile
            HttpContext.Current.Profile.SetPropertyValue("PreferredCulture", SelectedLanguage)

            'Force re-initialization of the page to fire InitializeCulture()
            Context.Server.Transfer(Context.Request.Path)
        End Sub

    But of course InitializeCulture then overrides whatever button selection has been made. Is there any way that InitializeCulture can check whether a button click has occurred and, if so, skip the routine? Any advice would be greatly appreciated, thanks.

    Read the article

  • Pointing non-www to a specific sub-directory

    - by Ben Sinclair
    I might be going about this all wrong, so let me know if I am. I am creating software that allows people to sign up and have their own sub-domain on my website. So say my website is ben.com; they could have their own sub-domain called juice.ben.com. When they type their sub-domain juice.ben.com into their address bar, it will load the contents of a root directory. I have also set up a .htaccess redirect to redirect www.ben.com to ben.com. Not sure if this matters to my question, but I thought I'd mention it.

    OK, so basically what I think I need to do is put the software they've signed up for in the root directory. So when someone goes to juice.ben.com, they will be pointed to the root directory (I believe I can set up wildcard sub-domains with my host) and the software will then analyse their sub-domain and display their account.

    Now, if someone just types ben.com into their browser, I want it to show the contents of the ben.com/_website/ folder, but still show in the address bar that they are in the root directory. Hopefully I am making sense :) Is this possible with .htaccess? If so, what do I need to do?
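
    For what it's worth, something along these lines is the usual shape of such an internal rewrite - a rough, untested sketch assuming mod_rewrite is available and the main site really does live in /_website/:

        RewriteEngine On
        # Only rewrite requests for the bare domain, not customer sub-domains
        RewriteCond %{HTTP_HOST} ^ben\.com$ [NC]
        # Avoid rewriting paths that already point into /_website/
        RewriteCond %{REQUEST_URI} !^/_website/
        # Internal rewrite: the address bar keeps showing ben.com
        RewriteRule ^(.*)$ /_website/$1 [L]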

    Read the article

  • LogonUser using LOGON32_LOGON_NEW_CREDENTIALS works against remote untrusted domain machine

    - by Jiho Han
    So, between the two machines there is no trust - they are in different domains. I've successfully connected to the remote machine using the LogonUser API with the logon type LOGON32_LOGON_NEW_CREDENTIALS. I am able to retrieve the contents of a directory using the UNC share, and create a file stream to "download" the file. So far so good.

    The only issue is that it seems LogonUser fails unless there is an already-open session. Let me clarify that. I found that the ASP.NET MVC page was not working this morning - specifically, the page that retrieves the file list from this remote machine using LogonUser. I looked at the log and saw in the stack trace a System.IO.__Error.WinIOError above the Directory.GetFiles call. I then remoted into the web server and tried to open the remote folder in Explorer using the same login/password used by the web site. It went through and I could see the files. I opened up the command prompt, typed in net use, and saw that there was an open connection to the remote machine. Then I went back to the page and suddenly the page was working again.

    So, at this point, I am not exactly sure whether LogonUser is working as expected or not. If the call requires that a network connection be opened first by other means, then this is certainly not satisfactory. Does anyone know what may be happening, or can you suggest a workaround?
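
    For readers who want to reproduce the setup, here is a minimal, hypothetical C# sketch of the pattern being described - LogonUser with LOGON32_LOGON_NEW_CREDENTIALS (which must be paired with the LOGON32_PROVIDER_WINNT50 provider), followed by impersonation for the UNC access. The credentials, domain, and share path are placeholders:

        using System;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class RemoteShareAccess
        {
            // NEW_CREDENTIALS logons require the WINNT50 provider.
            const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
            const int LOGON32_PROVIDER_WINNT50 = 3;

            [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("someUser", "REMOTEDOMAIN", "somePassword",
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                    throw new System.ComponentModel.Win32Exception();

                // Outbound network calls inside this block use the new credentials.
                // (A fuller version would also CloseHandle the token when done.)
                using (WindowsImpersonationContext ctx = WindowsIdentity.Impersonate(token))
                {
                    foreach (string file in Directory.GetFiles(@"\\remotemachine\share"))
                        Console.WriteLine(file);
                }
            }
        }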

    Read the article

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. Also, the question is not about any specific kind of application (in fact, the intention is to know which types of applications/domains fit into which of the cloud slabs - SaaS, PaaS, IaaS). My understanding so far is:
      - IaaS: raw hardware (processors, networks, storage).
      - PaaS: OS, system software, development framework, virtual machines.
      - SaaS: software applications.

    It would be great if Stackoverflowers could share their understanding and experiences of the cloud computing concept.

    EDIT: OK, I will put it in a more specific way:
      - Amazon EC2: You don't have control over the hardware layer, but you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy applications built with Google App Engine or Azure on EC2?
      - Google App Engine: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or, vice versa, can applications that were built on GAE be taken out of GAE and ported to an application server like WebSphere or WebLogic?
      - Azure: You don't have control over hardware or OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or, vice versa, can applications that were built on Azure be taken out of Azure and ported to an application server like BizTalk?

    Read the article

  • Google App Engine on Google Apps Domain

    - by Bob Ralian
    I'm having trouble getting my domain pointed to my website hosted with Google App Engine. Here's the background... take care to separate the concepts of "Google Apps" (domain hosting, email, etc.) and "Google App Engine" (website framework).

    I have a domain that's using Google Apps for Your Domain; let's call it company.com. So my login for my Google Apps account is [email protected]. I have a different domain that is aliased back to my Google Apps account; let's call it mycompany.com. It's been successfully aliased and registered with my primary Google Apps account using the CNAME method, and has updated MX records. We have a ton of domains, and I only want to use one Google Apps account to maintain them all.

    Now I have a website I've built using Google App Engine, and the URL is effectively mycompany.appspot.com. I want to get mycompany.com to point to my website that currently resides at mycompany.appspot.com.

    There's a spot in the Google App Engine dashboard under Application Settings where you can add a domain. So I click there and enter mycompany.com, and I get an error message saying that domain is not using Google Apps. If I back up to the page I submitted, there's a note saying I need to register the domain with Google Apps. So I click the link to do that and enter mycompany.com, and I get an error message saying the domain has been registered and is in the process of ownership verification. But that process is already finished.

    So... what do I do? Does Google App Engine not support a domain that is only aliased to a primary Google Apps account? Does mycompany.com need to have its own primary Google Apps account?

    Read the article

  • What is the proper query to get all the children in a tree?

    - by Nathan Adams
    Let's say I have the following MySQL structure:

        CREATE TABLE `domains` (
          `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
          `domain` CHAR(50) NOT NULL,
          `parent` INT(11) DEFAULT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MYISAM AUTO_INCREMENT=10 DEFAULT CHARSET=latin1;

        insert into `domains`(`id`,`domain`,`parent`) values (1,'.com',0);
        insert into `domains`(`id`,`domain`,`parent`) values (2,'example.com',1);
        insert into `domains`(`id`,`domain`,`parent`) values (3,'sub1.example.com',2);
        insert into `domains`(`id`,`domain`,`parent`) values (4,'sub2.example.com',2);
        insert into `domains`(`id`,`domain`,`parent`) values (5,'s1.sub1.example.com',3);
        insert into `domains`(`id`,`domain`,`parent`) values (6,'s2.sub1.example.com',3);
        insert into `domains`(`id`,`domain`,`parent`) values (7,'sx1.s1.sub1.example.com',5);
        insert into `domains`(`id`,`domain`,`parent`) values (8,'sx2.s2.sub1.example.com',6);
        insert into `domains`(`id`,`domain`,`parent`) values (9,'x.sub2.example.com',4);

    In my mind that is enough to emulate a simple tree structure:

        .com
          |
        example.com
         /        \
        sub1      sub2
        etc.

    My problem is that, given sub1.example.com, I want to know all the children of sub1.example.com without using multiple queries in my code. I have tried joining the table to itself and tried to use subqueries; I can't think of anything that will reveal all the children.

    At work we are using MPTT to keep a list of domains/subdomains in hierarchical order; however, I feel that there is an easier way to do it. I did some digging, and someone did something similar, but they required the use of a function in MySQL. I don't think for something simple like this we would need a whole function. Maybe I am just dumb and not seeing some sort of obvious solution. Also, feel free to alter the structure.
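
    Since every row here already stores the node's full domain name, one single-query angle (a sketch, not necessarily the answer the poster settled on) is to match on the name suffix instead of walking parent ids:

        -- All descendants of sub1.example.com in one query,
        -- relying on the fact that `domain` holds the full dotted name.
        SELECT *
        FROM domains
        WHERE domain LIKE '%.sub1.example.com';

    That sidesteps recursion entirely, at the cost of a leading-wildcard LIKE that cannot use an ordinary index.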

    Read the article

  • How to use 3rd party libraries in glassfish?

    - by Hank
    I need to connect to a MongoDB instance from my EJB3 application, running on GlassFish 3.0.1. The Mongo project provides a set of drivers, and I'm able to use them in a standalone Java application. How would I use them in a JEE application? Or, maybe better phrasing: how would I make a 3rd-party library available to my application when it runs in an EJB container? At the moment, I'm getting a java.lang.NoClassDefFoundError when deploying a bean that tries to import from the library:

        [#|2010-03-24T11:42:15.164+0100|SEVERE|glassfishv3.0|global|_ThreadID=28;_ThreadName=Thread-1;|Class [ com/mongodb/DBObject ] not found. Error while loading [ class mvs.core.LocationCacheService ]|#]
        [#|2010-03-24T11:42:15.164+0100|WARNING|glassfishv3.0|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=28;_ThreadName=Thread-1;|Error in annotation processing: java.lang.NoClassDefFoundError: com/mongodb/DBObject|#]
        [#|2010-03-24T11:42:15.259+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=28;_ThreadName=Thread-1;|Exception while loading the app org.glassfish.deployment.common.DeploymentException: java.lang.NoClassDefFoundError: com/mongodb/DBObject
            at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:171)
            at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:125)
            at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:224)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:338)

    I tried adding it to the NetBeans project (Properties - Libraries - Compile - Add Jar, enable 'Package'), and I also tried manually copying the jar file to $GF_HOME/glassfish/domains/domain1/lib (where the mysql-connector already resides). Do I need to 'register' the library with the container? Reference it via annotation? Extend the classpath of the container to include the library?
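
    One detail worth double-checking (an assumption on my part, not something the question confirms): jars dropped into the domain lib directory are typically only picked up when the domain starts, so the copy would need to be followed by a restart - roughly:

        # Hypothetical paths; adjust the driver jar name to the version you downloaded.
        cp mongo-java-driver.jar $GF_HOME/glassfish/domains/domain1/lib/
        asadmin stop-domain domain1
        asadmin start-domain domain1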

    Read the article

  • Lucene complex structure search

    - by archer
    Basically, I have a pretty simple database that I'd like to index with Lucene. The domains are:

        // Person domain
        class Person {
            Set<Pair> keys;
        }

        // Pair domain
        class Pair {
            KeyItem keyItem;
            String value;
        }

        // KeyItem domain; name is a unique field within the DB (!!)
        class KeyItem {
            String name;
        }

    I have tens of millions of profiles and hundreds of millions of Pairs. However, since most of the KeyItems' "name" fields are duplicates, there are only a few dozen KeyItem instances. I came up with that structure to save on KeyItem instances; basically any profile with any fields can be saved into it.

    Let's say we have a profile with these properties:
      - name: Andrew Morton
      - education: University of New South Wales
      - country: Australia
      - occupation: Linux programmer

    To store it, we'll have a single Profile instance, 4 KeyItem instances (name, education, country and occupation), and 4 Pair instances with the values "Andrew Morton", "University of New South Wales", "Australia" and "Linux Programmer". All other profiles will reference (all or some of) the same KeyItem instances: name, education, country and occupation.

    My question is: how do I index all of that so I can search for a Profile by particular values of KeyItem::name and Pair::value? Ideally I'd like this kind of query to work:

        name:Andrew* AND occupation:Linux*

    Should I create a custom Indexer and Searcher? Or could I use the standard ones and just map KeyItem and Pair as Lucene components somehow?
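
    A hedged sketch of the usual mapping (shown here with the Lucene.Net port's classic 2.x/3.x field API; the Java API is nearly line-for-line the same, and the Person/Pair types are transliterated from the question): flatten each Person into one Document, with the KeyItem name as the field name and the Pair value as the field value.

        // Sketch only: one Document per Person; the few dozen KeyItem names
        // simply become the index's field names.
        Document BuildDocument(Person person)
        {
            var doc = new Document();
            foreach (Pair pair in person.Keys)
            {
                doc.Add(new Field(pair.KeyItem.Name, pair.Value,
                                  Field.Store.YES, Field.Index.ANALYZED));
            }
            return doc;
        }

    With that mapping, a stock IndexWriter/IndexSearcher and the standard query parser should handle queries like name:Andrew* AND occupation:Linux* without any custom Indexer or Searcher.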

    Read the article

  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are:

      1. How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc.).
      2. What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced?
      3. Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else?
      4. What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores?
      5. If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.

    Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too.

    Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc.) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Read the article

  • ForeignSecurityPrincipals with LDAP connection on Active Directory servers with trusted forest

    - by Killerwhile
    The context is the following: two mutually trusting domains,
      - dc=dom1
      - dc=dom2
    and a group cn=group1,ou=someou,dc=dom1 with these users inside:
      - cn=user11,ou=anotherou,dc=dom1
      - cn=user12,ou=anotherou,dc=dom1
      - cn=user13,ou=anotherou,dc=dom1
      - cn=user21,ou=anotherou,dc=dom2
      - cn=user22,ou=anotherou,dc=dom2
      - cn=user23,ou=anotherou,dc=dom2

    The questions:

    1. Test users' credentials. How can I do an LDAP bind to test credentials for users of dom2? I tried to bind as usual, but I cannot authenticate users of dom2, even if I connect over LDAPS. Is there any trick? Special permissions to set?

    2. Search and display users from the group. How can I retrieve the detailed information about the users of dom1 and dom2 using an LDAP(S) connection to the AD of dom1? I have a technical user which has the right to browse both domains. I'm able to see 6 entries in the group with the following filter:

        (&(memberOf=cn=group1,ou=someou,dc=dom1)(|(objectClass=user)(objectClass=foreignSecurityPrincipal)))

    but the users from the other domain are seen as cn=...(some key)...,cn=foreignSecurityPrincipal,dc=dom1.

    Java hints would be better. Thanks a lot!

    Read the article

  • Amazon S3 and swfaddress

    - by justinbach
    I recently migrated a large AS3 site (lots of swfs, lots of flvs) to Amazon S3. Pretty much everything but HTML and JS files is being stored/served from Amazon, and it's working well. The only problem I'm having is that I built the site using SWFaddress (actually, via the Gaia framework which uses SWFaddress), and for some reason, SWFaddress is no longer updating the address bar correctly as users navigate from page to page. In other words, the URL persistently remains http://www.mysite.com, not http://www.mysite.com/#/section as would be the case were SWFaddress functioning correctly (and as it was functioning prior to the migration). Stranger yet, if I go to (e.g.) http://www.mysite.com/#/section directly, the deeplinking functions as you'd expect--I arrive directly at the correct section. However, navigating away from that section doesn't have any effect on the address bar, despite the fact that it should be dynamically updated. I've got a crossdomain.xml file set up on the site that allows access from all domains, so that's not the issue, and I don't know what else might be. Any ideas would be greatly appreciated! P.S. I integrated S3 by putting pretty much the entire site in an S3 bucket and then just changing the initial swfobject embed to point to the S3 instance of main.swf, passing in the S3 path as the "base" param to the embedded swf so that all dynamically loaded assets and swfs would also be sourced from s3. Dunno if that's related to the troubles I'm having.

    Read the article

  • Nhibernate get collection by ICriteria

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a problem getting my entity.

    Mapping (the original was missing the closing hibernate-mapping tag, restored here):

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="Clients.Core"
                           namespace="Clients.Core.Domains">
          <class name="Sales, Clients.Core" table="sales">
            <id name="Id" unsaved-value="0">
              <column name="id" not-null="true"/>
              <generator class="native"/>
            </id>
            <property name="Guid">
              <column name="guid"/>
            </property>
            <set name="Accounts" table="sales_users" lazy="false">
              <key column="sales_id" />
              <element column="user_id" type="Int32" />
            </set>
          </class>
        </hibernate-mapping>

    Domain:

        public class Sales : BaseDomain
        {
            ICollection<int> accounts = new List<int>();

            public virtual ICollection<int> Accounts
            {
                get { return accounts; }
                set { accounts = value; }
            }

            public Sales() { }
        }

    I want a query such as:

        SELECT * FROM sales s
        INNER JOIN sales_users su ON su.sales_id = s.id
        WHERE su.user_id = :N

    How can I do this through an ICriterion object? Thanks a lot.
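
    A hedged sketch of one way this is commonly expressed (an assumption on my part: since Accounts maps to a collection of plain ints rather than an entity, the Criteria API is awkward here, so the example falls back to HQL's elements() function; "session" and "userId" are assumed to exist):

        // Sketch: HQL instead of ICriterion, matching the SQL join above.
        var sales = session
            .CreateQuery("from Sales s where :userId in elements(s.Accounts)")
            .SetParameter("userId", userId)
            .List<Sales>();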

    Read the article
