Search Results

Search found 31355 results on 1255 pages for 'google group'.


  • Someone tried to hack my Node.js server, need to understand a GET request in the logs

    - by Akay
    Alright, so I left my Node.js server alone for a while and came back to find some really interesting stuff in the logs. Apparently some moron from China or Poland tried to hack my server using directory traversal and whatnot; while it seems he did not succeed, I am unable to understand a few entries in the log. This is the output of the "nohup.out" file. The attack starts with him apparently probing for some console entry point on my server; all of these requests fail and return a 404.
    [90mGET /../../../../../../../../../../../ [31m500 [90m6ms - 2b[0m
    [90mGET /<script>alert(53416)</script> [33m404 [90m7ms[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m1ms - 240b[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET /pz3yvy3lyzgja41w2sp [33m404 [90m1ms[0m
    [90mGET /stylesheets/style.css [33m404 [90m0ms[0m
    [90mGET /index.html [33m404 [90m1ms[0m
    [90mGET /index.htm [33m404 [90m0ms[0m
    [90mGET /default.html [33m404 [90m0ms[0m
    [90mGET /default.htm [33m404 [90m1ms[0m
    [90mGET /default.asp [33m404 [90m1ms[0m
    [90mGET /index.php [33m404 [90m0ms[0m
    [90mGET /default.php [33m404 [90m1ms[0m
    [90mGET /index.asp [33m404 [90m0ms[0m
    [90mGET /index.cgi [33m404 [90m0ms[0m
    [90mGET /index.jsp [33m404 [90m1ms[0m
    [90mGET /index.php3 [33m404 [90m0ms[0m
    [90mGET /index.pl [33m404 [90m0ms[0m
    [90mGET /default.jsp [33m404 [90m0ms[0m
    [90mGET /default.php3 [33m404 [90m0ms[0m
    [90mGET /index.html.en [33m404 [90m0ms[0m
    [90mGET /web.gif [33m404 [90m34ms[0m
    [90mGET /header.html [33m404 [90m1ms[0m
    [90mGET /homepage.nsf [33m404 [90m1ms[0m
    [90mGET /homepage.htm [33m404 [90m1ms[0m
    [90mGET /homepage.asp [33m404 [90m1ms[0m
    [90mGET /home.htm [33m404 [90m0ms[0m
    [90mGET /home.html [33m404 [90m1ms[0m
    [90mGET /home.asp [33m404 [90m1ms[0m
    [90mGET /login.asp [33m404 [90m0ms[0m
    [90mGET /login.html [33m404 [90m0ms[0m
    [90mGET /login.htm [33m404 [90m1ms[0m
    [90mGET /login.php [33m404 [90m0ms[0m
    [90mGET /index.cfm [33m404 [90m0ms[0m
    [90mGET /main.php [33m404 [90m1ms[0m
    [90mGET /main.asp [33m404 [90m1ms[0m
    [90mGET /main.htm [33m404 [90m1ms[0m
    [90mGET /main.html [33m404 [90m2ms[0m
    [90mGET /Welcome.html [33m404 [90m1ms[0m
    [90mGET /welcome.htm [33m404 [90m1ms[0m
    [90mGET /start.htm [33m404 [90m1ms[0m
    [90mGET /fleur.png [33m404 [90m0ms[0m
    [90mGET /level/99/ [33m404 [90m1ms[0m
    [90mGET /chl.css [33m404 [90m0ms[0m
    [90mGET /images/ [33m404 [90m0ms[0m
    [90mGET /robots.txt [33m404 [90m2ms[0m
    [90mGET /hb1/presign.asp [33m404 [90m1ms[0m
    [90mGET /NFuse/ASP/login.htm [33m404 [90m0ms[0m
    [90mGET /CCMAdmin/main.asp [33m404 [90m1ms[0m
    [90mGET /TiVoConnect?Command=QueryServer [33m404 [90m1ms[0m
    [90mGET /admin/images/rn_logo.gif [33m404 [90m1ms[0m
    [90mGET /vncviewer.jar [33m404 [90m1ms[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m7ms - 240b[0m
    [90mOPTIONS / [32m200 [90m1ms - 3b[0m
    [90mTRACE / [33m404 [90m0ms[0m
    [90mPROPFIND / [33m404 [90m0ms[0m
    [90mGET /\./ [33m404 [90m1ms[0m
    But here is where things start getting fishy.
    [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m1ms - 240b[0m
    [90mGET /robots.txt [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m3ms[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET http://37.28.156.211/sprawdza.php [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
    [90mHEAD / [32m200 [90m1ms - 240b[0m
    [90mGET http://www.daydaydata.com/proxy.txt [33m404 [90m19ms[0m
    [90mHEAD / [32m200 [90m1ms - 240b[0m
    [90mGET /manager/html [33m404 [90m2ms[0m
    [90mGET / [32m200 [90m4ms - 240b[0m
    [90mGET http://www.google.pl/search?q=wp.pl [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mHEAD / [32m200 [90m2ms - 240b[0m
    [90mGET http://www.google.pl/search?q=onet.pl [33m404 [90m1ms[0m
    [90mHEAD / [32m200 [90m2ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET http://www.google.pl/search?q=ostro%C5%82%C4%99ka [33m404 [90m1ms[0m
    [90mGET http://www.google.pl/search?q=google [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
    [90mHEAD / [32m200 [90m2ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET http://www.baidu.com/ [32m200 [90m2ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mPOST /api/login [32m200 [90m1ms - 28b[0m
    [90mGET /web-console/ServerInfo.jsp [33m404 [90m2ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m10ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://proxyjudge.info [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m2ms - 240b[0m
    [90mGET / [32m200 [90m1ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m
    [90mGET http://www.baidu.com/ [32m200 [90m1ms - 240b[0m
    [90mGET /manager/html [33m404 [90m0ms[0m
    [90mGET /manager/html [33m404 [90m1ms[0m
    [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m
    [90mHEAD / [32m200 [90m1ms - 240b[0m
    [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m
    [90mGET http://www.google.com/search?tbo=d&source=hp&num=1&btnG=Search&q=niceman [33m404 [90m2ms[0m
    So my questions are: how come my server is returning a "200" OK for these absolute URLs? And how did the hacker even manage to send a GET request to my server such that "http://www.google.com" shows up in the log, while my server is simply an API that works on relative URLs such as "/api/login"?
    And while I have looked up the OPTIONS, TRACE and PROPFIND HTTP verbs that my server logged, it would be great if someone could explain what exactly the hacker was trying to achieve by using them. Also, what in the world does "[90m [32m [90m1ms - 240b[0m" mean? The "ms" makes sense, probably milliseconds for the request; the rest I am unable to understand. Thank you!
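
    For context, the bracketed codes are ANSI terminal color escape sequences whose leading ESC byte got stripped when the log was written to a file; they match what Express's "dev" logger prints. A minimal sketch follows (it assumes the morgan/Connect logger middleware, which the post itself does not name):

        // Sketch of a logger setup that produces lines like "GET / 200 2ms - 240b"
        // wrapped in ANSI colors. Assumes Express with the morgan 'dev' format
        // (an assumption, not stated in the post).
        var express = require('express');
        var logger = require('morgan');

        var app = express();
        app.use(logger('dev'));

        // \x1b[90m = gray, \x1b[32m = green (2xx), \x1b[33m = yellow (4xx),
        // \x1b[31m = red (5xx), \x1b[0m = reset. The trailing "240b" is the
        // response Content-Length in bytes.
        app.get('/', function (req, res) {
          res.send('hello');
        });

        app.listen(3000);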

    Read the article

  • jqGrid - Problems opening in jQuery tabs (on Firefox and Google Chrome)

    - by Ben Hargreaves
    I have developed a very simple MVC app to test out Trirand's jqGrid for MVC. The app opens a jqGrid in a jQuery tab group, and everything is OK with IE. However, when I use Firefox the jqGrid only opens occasionally in the first tab (and not under any other tab), and in Chrome my jqGrids don't appear to open under any tab of the group. I'm a bit of an MVC newbie (and have only been testing jqGrid out for a few days), but I know my users will want to use different browsers. Trirand have not come back with any answer, so I wondered if anyone else has had a similar issue. I have really just implemented jqGrid as per the controllers and model in the sample application on the Trirand site, and then combined it with a straightforward jQuery tab group. My MVC details page is as follows: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.Family>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <%@ Import Namespace="PRAMSAPP.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Details </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link rel="stylesheet" type="text/css" href="/scripts/jquery-ui-1.7.2.custom.css" /> <script type="text/javascript" src="/scripts/jquery-1.3.2.min.js"></script> <script type="text/javascript" src="/scripts/jquery-ui-1.7.2.custom.min.js"></script> <fieldset> <legend>Family</legend> <div class="display-field"><%= Html.Encode(Model.FamilyID) %></div> <div class="display-field"><%= Html.Encode(Model.FamilySurname) %></div> </fieldset> <div id="tabs"> <ul> <li> <%= Html.ActionLink("GridChildren", "GridDemo", new { controller = "Grid", id = Model.FamilyID })%> </li> <li> <%= Html.ActionLink("Children", "ShowFamiliesChildren", new { famid = Model.FamilyID, page = Page})%> </li> </ul> </div> <p> <%= Html.ActionLink("Edit", "Edit", new { id=Model.FamilyID }) %> | <%= Html.ActionLink("Back to List", "Index") %> </p> <script type="text/javascript"> $(function() { $('#tabs').tabs(); }); </script> </asp:Content> And my grid view page is as follows: <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<PRAMSAPP.Models.FamiliesChildrenJqGridModel>" %> <%@ Import Namespace="Trirand.Web.Mvc" %> <%@ Import Namespace="PRAMSAPP.Controllers" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head id="Head1" runat="server"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <!-- The jQuery UI theme that will be used by the grid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/redmond/jquery-ui-1.7.1.custom.css" /> <!-- The Css UI theme extension of jqGrid --> <link rel="stylesheet" type="text/css" media="screen" href="/Content/themes/ui.jqgrid.css" /> <!-- jQuery library is a prerequisite for jqGrid --> <script type="text/javascript" src="/Scripts/jquery-1.3.2.min.js"></script> <!-- language pack - MUST be included before the jqGrid javascript --> <script type="text/javascript" src="/Scripts/grid.locale-en.js"></script> <script type="text/javascript" src="/Scripts/jqgrid/jquery.jqGrid.min.js"></script> </head> <body> <div> <%= Html.Trirand().JQGrid(Model.FamiliesChildrenGrid, "JQGrid1") %> </div> </body>
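
    A hedged note on a common cause: jqGrid measures its container when it is created, so a grid built inside a hidden jQuery UI tab can be sized against a zero-width element and never render. Re-sizing the grid when its tab is shown is a frequently suggested workaround; the selector names below are assumptions, not taken from the post:

        // Sketch: resize the grid whenever its tab becomes visible, so it is
        // never laid out while hidden. '#tabs' and '#JQGrid1' are assumed names.
        $(function () {
          $('#tabs').tabs({
            show: function (event, ui) {
              var grid = $('#JQGrid1');
              if (grid.length) {
                grid.jqGrid('setGridWidth', $(ui.panel).width());
              }
            }
          });
        });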

    Read the article

  • Python RSS Feeder + Jquery RSS Reader

    - by Panther24
    I'm trying to implement an RSS feeder in Python using PyRSS2Gen, and it created an XML file. Now in jQuery I'm using the Google AJAX API's RSS reader to try to read it, and I get an error saying something is wrong. I'm unable to figure out the issue; I've checked some documentation but am not able to get it. This is how my XML file looks: <rss version="2.0"> <channel> <title>Andrew's PyRSS2Gen feed</title> <link> http://www.dalkescientific.com/Python/PyRSS2Gen.html </link> <description> The latest news about PyRSS2Gen, a Python library for generating RSS2 feeds </description> <lastBuildDate>Fri, 26 Mar 2010 13:10:55 GMT</lastBuildDate> <generator>PyRSS2Gen-1.0.0</generator> <docs>http://blogs.law.harvard.edu/tech/rss</docs> <item> <title>PyRSS2Gen-0.0 released</title> <link> http://www.dalkescientific.com/news/030906-PyRSS2Gen.html </link> <description> Dalke Scientific today announced PyRSS2Gen-0.0, a library for generating RSS feeds for Python. </description> <guid isPermaLink="true"> http://www.dalkescientific.com/news/030906-PyRSS2Gen.html </guid> <pubDate>Sat, 06 Sep 2003 21:31:00 GMT</pubDate> </item> <item> <title>Thoughts on RSS feeds for bioinformatics</title> <link> http://www.dalkescientific.com/writings/diary/archive/2003/09/06/RSS.html </link> <description> One of the reasons I wrote PyRSS2Gen was to experiment with RSS for data collection in bioinformatics. Last year I came across... </description> <guid isPermaLink="true"> http://www.dalkescientific.com/writings/diary/archive/2003/09/06/RSS.html </guid> <pubDate>Sat, 06 Sep 2003 21:49:00 GMT</pubDate> </item> </channel> </rss> And my JS looks like this: <!-- ++Begin Dynamic Feed Wizard Generated Code++ --> <!-- // Created with a Google AJAX Search and Feed Wizard // http://code.google.com/apis/ajaxsearch/wizards.html --> <!-- // The Following div element will end up holding the actual feed control. // You can place this anywhere on your page. --> <!-- Google Ajax Api --> <script src="../js/jsapi.js" type="text/javascript"></script> <!-- Dynamic Feed Control and Stylesheet --> <script src="../js/gfdynamicfeedcontrol.js" type="text/javascript"></script> <link rel="stylesheet" type="text/css" href="../css/gfdynamicfeedcontrol.css"> <script type="text/javascript"> function LoadDynamicFeedControl() { var feeds = [ {title: 'Demo', url: 'wcarss.xml' }]; var options = { stacked : false, horizontal : false, title : "" } new GFdynamicFeedControl(feeds, 'feed-control', options); } // Load the feeds API and set the onload callback. google.load('feeds', '1'); google.setOnLoadCallback(LoadDynamicFeedControl); </script> <!-- ++End Dynamic Feed Control Wizard Generated Code++ --> Can someone help me and tell me what I'm doing wrong here? NOTE: Does the XML file have to be deployed? Because currently it's on my local machine and I'm trying to just call it.
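
    On the deployment question at the end: the Google Feeds API downloads the feed on Google's servers, so it can only read a feed that is publicly reachable on the internet; a file that exists only on a local machine will fail to load. A minimal sketch of surfacing the loader's error (the URL is a placeholder, not a real location):

        // Sketch: load a feed via the Google Feeds API and report its error.
        // The feed URL must be publicly accessible because Google fetches it
        // server-side; 'example.com' is a stand-in.
        google.load('feeds', '1');
        google.setOnLoadCallback(function () {
          var feed = new google.feeds.Feed('http://example.com/wcarss.xml');
          feed.load(function (result) {
            if (result.error) {
              alert('Feed error: ' + result.error.message);
            }
          });
        });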

    Read the article

  • Toggle KML Layers, Infowindow isn't working

    - by user1653126
    I have this code; I am trying to toggle some KML layers. The problem is that when I click a marker it isn't showing the info window. Maybe someone can show me my error. Thanks. Here is the code: <!DOCTYPE html> <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no" /> <style type="text/css"> html { height: 100% } body { height: 100%; margin: 0; padding: 0 } #map_canvas { height: 100% } </style> <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?key=IzaSyAvj6XNNPO8YPFbkVR8KcTl5LK1ByRHG1E&sensor=false"> </script> <script type="text/javascript"> var map; // lets define some vars to make things easier later var kml = { a: { name: "Productores", url: "https://maps.google.hn/maps/ms?authuser=0&vps=2&hl=es&ie=UTF8&msa=0&output=kml&msid=200984447026903306654.0004c934a224eca7c3ad4" } // keep adding more if you like }; // initialize our goo function initializeMap() { var options = { center: new google.maps.LatLng(13.324182,-87.080071), zoom: 8, mapTypeId: google.maps.MapTypeId.TERRAIN } map = new google.maps.Map(document.getElementById("map_canvas"), options); createTogglers(); }; google.maps.event.addDomListener(window, 'load', initializeMap); // the important function... kml[id].xxxxx refers back to the top function toggleKML(checked, id) { if (checked) { var layer = new google.maps.KmlLayer(kml[id].url, { preserveViewport: true, suppressInfoWindows: true }); // store kml as obj kml[id].obj = layer; kml[id].obj.setMap(map); } else { kml[id].obj.setMap(null); delete kml[id].obj; } }; // create the controls dynamically because it's easier, really function createTogglers() { var html = "<form><ul>"; for (var prop in kml) { html += "<li id=\"selector-" + prop + "\"><input type='checkbox' id='" + prop + "'" + " onclick='highlight(this,\"selector-" + prop + "\"); toggleKML(this.checked, this.id)' \/>" + kml[prop].name + "<\/li>"; } html += "<li class='control'><a href='#' onclick='removeAll();return false;'>" + "Remove all layers<\/a><\/li>" + "<\/ul><\/form>"; document.getElementById("toggle_box").innerHTML = html; }; function removeAll() { for (var prop in kml) { if (kml[prop].obj) { kml[prop].obj.setMap(null); delete kml[prop].obj; } } }; // Append Class on Select function highlight(box, listitem) { var selected = 'selected'; var normal = 'normal'; document.getElementById(listitem).className = (box.checked ? selected: normal); }; </script> <style type="text/css"> .selected { font-weight: bold; } </style> </head> <body> <div id="map_canvas" style="width: 50%; height: 200px;"></div> <div id="toggle_box" style="position: absolute; top: 200px; right: 1000px; padding: 20px; background: #fff; z-index: 5; "></div> </body> </html>
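
    One likely culprit worth flagging (an observation on the code above, not a verified fix): the layers are created with suppressInfoWindows: true, which tells the Maps API not to open info windows for KML features at all. Dropping that option, or setting it to false, restores the default click behavior:

        // Sketch: create the layer without suppressing its info windows, so
        // clicking a KML marker opens the default info window again.
        var layer = new google.maps.KmlLayer(kml[id].url, {
          preserveViewport: true,
          suppressInfoWindows: false
        });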

    Read the article

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle’s ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings for the business - a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution with shorter time to value and lower operational costs. Edcon is a leading clothing, footwear and textiles (CFT) retailing group in southern Africa, trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate its existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel and E-Business Suite) on a common platform, and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to derive significant hardware CAPEX savings, improve the response time of core business applications and mitigate operating risk. Hear senior business leaders from Nintendo, Edcon and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012: Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective Date: Monday, 1 Oct, 2012 Time: 1:45 pm - 2:45 pm (PST) Venue: Moscone South (306)

    Read the article

  • Is there such a thing as a 'refactoring/maintainability group' role in software companies?

    - by dukeofgaming
    So, I work in a company that does embedded software development. Other groups focus on the core development of each product's software, while my department, which is in another geographical location at the factory, has to deal with software development as well, but across all products, so that we can also fix things quickly when the lines go down due to software problems with a product. In other words, we are generalists while the other groups each specialize on one product. Thing is, it is kind of hard to get involved in core development when you are distributed geographically (well, I know it really isn't that hard, but there might be unintended cultural/political barriers when it comes to the discipline of collaborating remotely). So I figured that, since we are currently just putting out fires and are somewhat idle/under-utilized (even though we are a new department, or maybe that is the reason), a good role for us could be detecting areas of opportunity for refactoring and rearchitecting code, and everything else that has to do with stewarding maintainability and modularity. The other groups aren't focused on this because they don't have the time and they have aggressive deadlines, which damages the quality of the code (the eternal story of software projects). The thing is that I want my group/department to be officially recognized by management and the other groups in this role, and I'm having trouble coming up with a good definition/identity of our group for this purpose. So my question is: is this a role that already exists, or am I the first one to make something like this up?

    Read the article

  • Python: NameError: 'self' is not defined

    - by Rosarch
    I must be doing something stupid. I'm running this in Google App Engine:
    def render(self, template_name, template_data):
        path = os.path.join(os.path.dirname(__file__), 'static/templates/%s.html' % template_name)
        self.response.out.write(template.render(path, template_data))
    This gives an error:
    Traceback (most recent call last):
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3192, in _HandleRequest
        self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3135, in _Dispatch
        base_env_dict=env_dict)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 516, in Dispatch
        base_env_dict=base_env_dict)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2394, in Dispatch
        self._module_dict)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2304, in ExecuteCGI
        reset_modules = exec_script(handler_path, cgi_path, hook)
      File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2200, in ExecuteOrImportScript
        exec module_code in script_module.__dict__
      File "main.py", line 22, in <module>
        class MainHandler(webapp.RequestHandler):
      File "main.py", line 38, in MainHandler
        self.writeOut(template.render(path, template_data))
    NameError: name 'self' is not defined
    What am I doing wrong?
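
    The traceback itself carries the hint: the failing line runs "in MainHandler" at class-definition time, not inside a method, and self only exists inside instance methods. A hedged sketch of the distinction (handler and helper names are assumed from the traceback):

        import os
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp import template

        class MainHandler(webapp.RequestHandler):
            # BROKEN: a statement placed directly in the class body executes
            # while the class is being defined, where 'self' is not a name:
            # self.writeOut(template.render(path, template_data))

            def get(self):
                # OK: 'self' exists here, inside an instance method.
                path = os.path.join(os.path.dirname(__file__),
                                    'static/templates/index.html')
                self.response.out.write(template.render(path, {}))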

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema of the element being reverse engineered as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us through selecting the elements whose relational model data should be retrieved, which we can later use to reverse engineer an entity model. The first step after specifying which database server to connect to is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • jQuery AJAX (Google PageRank)

    - by RobertPitt
    Hey guys, I need a little help. I've been developing a jQuery plug-in to get the page ranks of URLs on a website using XHR. The problem is that when requesting the rank from Google's servers, the page is returned with no content; but if I use an inspector, take the URL that was requested and go to it in my browser, the page ranks are shown. So it must be something with headers, but it has me puzzled. Here is some source code; I have removed several parts that are not needed for review. pagerank.plugin.js ( $.fn.PageRank = function(callback) { var _library = new Object(); //Create the system library _library.parseUrl = function(a) { var b = {}; var a = a || ''; /* * parse the url to extract its parts */ if (a = a.match(/((s?ftp|https?):\/\/){1}([^\/:]+)?(:([0-9]+))?([^\?#]+)?(\?([^#]+))?(#(.+))?/)) { b.scheme = a[2] ? a[2] : "http"; b.host = a[3] ? a[3] : null; b.port = a[5] ? a[5] : null; b.path = a[6] ? a[6] : null; b.args = a[8] ? a[8] : null; b.anchor = a[10] ? a[10] : null } return b } _library.ValidUrl = function(url) { var b = true; return b = url.host === undefined ? false : url.scheme != "http" && url.scheme != "https" ? false : url.host == "localhost" ? false : true } _library.toHex = function(a){ return (a < 16 ? "0" : "") + a.toString(16) } _library.hexEncodeU32 = function(a) { } _library.generateHash = function(a) { for (var b = 16909125, c = 0; c < a.length; c++) { } return _library.hexEncodeU32(b) } var CheckPageRank = function(domain,_call) { var hash = _library.generateHash(domain); $.ajax( { url: 'http://www.google.com/search?client=navclient-auto&ch=8'+hash+'&features=Rank&q=info:' + escape(domain), async: true, dataType: 'html', ifModified:true, contentType:'', type:'GET', beforeSend:function(xhr) { xhr.setRequestHeader('Referer','http://google.com/'); //Set Referer }, success: function(content,textS,xhr){ var d = xhr.responseText.substr(9, 2).replace(/\s$/, ""); if (d == "" || isNaN(d * 1)) d = "0"; _call(d); } }); } //Return the callback $(this).each(function(){ urlsegments = _library.parseUrl($(this).attr('href')) if(_library.ValidUrl(urlsegments)) { CheckPageRank(urlsegments.host,function(rank){ alert(rank) callback(rank); }); } }); return this; //Don't break any chain. } )(jQuery); Index.html (example) <html> <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="pagerank.plugin.js"></script> <script type="text/javascript"> $(document).ready(function() { $('a').PageRank(function(pr){ alert(pr); }) }); </script> </head> <body> <a href="http://facebook.com">a</a> <a href="http://twitter.com">a</a> <div></div> </body> </html> I just can't understand why it's doing this.
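
    A hedged note on the symptom: a browser XHR to a different origin such as google.com falls under the same-origin policy, so the script cannot read the response body (it comes back empty), and xhr.setRequestHeader('Referer', ...) is silently refused because Referer is a browser-controlled header. Routing the lookup through a small same-origin proxy is one common workaround; the endpoint below is hypothetical, not part of the post:

        // Sketch: ask our own server to fetch the PageRank and relay it back.
        // '/pagerank-proxy' is a hypothetical same-origin endpoint.
        $.ajax({
          url: '/pagerank-proxy',
          data: { domain: urlsegments.host },
          success: function (rank) {
            callback(rank);
          }
        });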

    Read the article

  • What Ranking Factors Are Used For International Search?

    - by Itai
    Google.com vs Google.ca vs Google.co.uk (etc.) all rank their results differently; the intention is to return more locally relevant content. What factors, other than the ones below, are used to determine local relevancy? I already know the TLD (.com, .ca, etc.) is used, and likely the server IP address, but there has to be more, as this would not explain some search results I noticed this week. In particular, I see a US-based site ranking #3 for some keywords on Google.com, ranking #5 on Google.ca, and not ranking within the first pages on Google.co.uk. On Google.com it outranks an Australian site which outranks it on Google.ca. The site itself is relevant for all English-speaking locations, yet it is being outranked by sites from different regions on different Google TLDs (but not by ones from the same region as the TLD).

    Read the article

  • imapsync - Authentication failed

    - by Touff
    I've deployed many Google Apps accounts and have used imapsync a number of times to migrate accounts to Google Apps. This time, however, no matter what I try, imapsync refuses to work, claiming my credentials are incorrect. I've checked them time and time again and they are 100% correct. On Ubuntu 12, built from source, my command is:
    imapsync --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN
    Full output from the command:
    get options: [1] PID is 21316
    $RCSfile: imapsync,v $ $Revision: 1.592 $ $Date:
    With perl 5.14.2 Mail::IMAPClient 3.35
    Command line used: /usr/bin/imapsync --debug --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN
    Temp directory is /tmp
    PID file is /tmp/imapsync.pid
    Modules version list:
    Mail::IMAPClient 3.35
    IO::Socket 1.32
    IO::Socket::IP ?
    IO::Socket::INET 1.31
    IO::Socket::SSL 1.53
    Net::SSLeay 1.42
    Digest::MD5 2.51
    Digest::HMAC_MD5 1.01
    Digest::HMAC_SHA1 1.03
    Term::ReadKey 2.30
    Authen::NTLM 1.09
    File::Spec 3.33
    Time::HiRes 1.972101
    URI::Escape 3.31
    Data::Uniqid 0.12
    IMAPClient 3.35
    Info: turned ON syncinternaldates, will set the internal dates (arrival dates) on host2 same as host1.
    Info: will try to use LOGIN authentication on host1
    Info: will try to use PLAIN authentication on host2
    Info: imap connexions timeout is 120 seconds
    Host1: IMAP server [SERVER1] port [993] user [USER1]
    Host2: IMAP server [imap.gmail.com] port [993] user [USER2]
    Host1: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
    Host1: SERVER1 says it has CAPABILITY for AUTHENTICATE LOGIN
    Host1: success login on [SERVER1] with user [USER1] auth [LOGIN]
    Host2: * OK Gimap ready for requests from MY-VPS
    Host2: imap.gmail.com says it has CAPABILITY for AUTHENTICATE PLAIN
    Failure: error login on [imap.gmail.com] with user [USER2] auth [PLAIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)
    I have tried -authmech2 LOGIN as well, which returns:
    Host2: imap.gmail.com says it has NO CAPABILITY for AUTHENTICATE LOGIN
    Failure: error login on [imap.gmail.com] with user [[email protected]] auth [LOGIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)
    If anyone can shed some light on this I would greatly appreciate it.

    Read the article

  • Guice and JSF 2

    - by digitaljoel
    I'm trying to use Guice to inject properties of a JSF managed bean, all running on Google App Engine (which may or may not be important). I've followed the instructions here: http://code.google.com/docreader/#p=google-guice&s=google-guice&t=GoogleAppEngine One problem is in the first step: I can't subclass the servlet module and set up my servlet mappings there, because Faces is handled by javax.faces.webapp.FacesServlet, which implements Servlet rather than extending HttpServlet. So I tried leaving my servlet configuration in the web.xml file and simply instantiating a new ServletModule() along with my business module when creating the injector in the context listener described in the second step. Having done all that, along with the web.xml configuration, my managed bean isn't getting any properties injected. The bean is as follows: @ManagedBean @ViewScoped public class ViewTables implements Serializable { private DataService<Table> service; @Inject public void setService( DataService<Table> service ) { this.service = service; } public List<Table> getTables() { return service.getAll(); } } So, I'm wondering: is there a trick to get Guice injecting into a JSF managed bean? I obviously can't use constructor injection, because JSF needs a no-arg constructor to create the bean.
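
    One bridge pattern worth noting (a sketch under assumptions, not a confirmed fix for this setup): JSF instantiates @ManagedBean classes itself, so Guice never gets a chance to perform injection; the bean can instead pull its own members from the Injector after construction:

        // Sketch: after JSF constructs the bean, ask Guice to inject the members.
        // Assumes the Injector was stored in the ServletContext by the Guice
        // context listener (the attribute key below is an assumption).
        @PostConstruct
        public void init() {
            ServletContext ctx = (ServletContext) FacesContext.getCurrentInstance()
                    .getExternalContext().getContext();
            Injector injector = (Injector) ctx.getAttribute(Injector.class.getName());
            injector.injectMembers(this); // populates the @Inject setter above
        }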

    Read the article

  • MDX: Problem filtering results in MDX query used in Reporting Services query

    - by wgpubs
    Why aren't my results being filtered by the members from my [Group Hierarchy] returned via the filter() statement below? SELECT NON EMPTY {[Measures].[Group Count], [Measures].[Overall Group Count] } ON COLUMNS, NON EMPTY { [Survey].[Surveys By Year].[Survey Year].ALLMEMBERS * [Response Status].[Response Status].[Response Status].ALLMEMBERS} DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS FROM ( SELECT ( { [Survey Type].[Survey Type Hierarchy].&[9] } ) ON COLUMNS FROM ( SELECT ( { [Response Status].[Response Status].[All] } ) ON COLUMNS FROM ( SELECT ( STRTOSET(@SurveySurveysByYear, CONSTRAINED) ) ON COLUMNS FROM ( SELECT(filter([Group].[Group Hierarchy].members, instr(@GroupGroupFullName,[Group].[Group Hierarchy].Properties( "Group Full Name" )))) on columns FROM [SysSurveyDW])))) CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

    Read the article

  • Unable to stop chrome.exe *32

    - by chipperyman573
    So I was installing RoboForm today and was unable to stop the process chrome.exe *32, even after I uninstalled Chrome. This is the error I got: I used LockHunter and it said the file was located in %appdata%\Local\Google\Chrome. However, it was unable to unlock, delete or rename it. When I use Explorer to delete or rename that folder, it says it's being used by Chrome. Even after restarting my computer it still does this. I've tried using the built-in Chrome task manager (Wrench > View Background Pages) and I can't seem to find a process there that has the same amount of memory. I have run many, many virus scans with Microsoft Security Essentials, AVG (free version), Malwarebytes (Pro version), Norton 360 (Pro version), McAfee (Pro version), Avira (free version) and Avast! Antivirus (free version), none of which returned any viruses. Chrome info:
    Google Chrome 23.0.1271.95 (Official Build 169798)
    OS Windows 7 Professional
    WebKit 537.11 (@135931)
    JavaScript V8 3.13.7.5
    Flash 11.5.31.2
    User Agent Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11

    Read the article

  • Using Java Executor on AppEngine causes AccessControlException

    - by Drew
    How do you get java.util.concurrent.Executor or CompletionService to work on Google AppEngine? The classes are all officially white-listed, but I get a runtime security error when trying to submit asynchronous tasks. Code:
    // uses the async API but this factory makes it so that tasks really
    // happen sequentially
    Executor executor = java.util.concurrent.Executors.newSingleThreadExecutor();
    // wrap Executor in CompletionService
    CompletionService<String> completionService = new ExecutorCompletionService<String>(executor);
    final SomeTask someTask = new SomeTask();
    // this line throws exception
    completionService.submit(new Callable<String>(){
        public String call() {
            return someTask.doNothing("blah");
        }
    });
    // alternately, send Runnable task directly to Executor,
    // which also throws an exception
    executor.execute(new Runnable(){
        public void run() {
            someTask.doNothing("blah");
        }
    });
    }
    private class SomeTask{
        public String doNothing(String message){
            return message;
        }
    }
    Exception:
    java.security.AccessControlException: access denied (java.lang.RuntimePermission modifyThreadGroup)
    at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
    at java.security.AccessController.checkPermission(AccessController.java:546)
    at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
    at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166)
    at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkAccess(DevAppServerFactory.java:191)
    at java.lang.ThreadGroup.checkAccess(ThreadGroup.java:288)
    at java.lang.Thread.init(Thread.java:332)
    at java.lang.Thread.(Thread.java:565)
    at java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:542)
    at java.util.concurrent.ThreadPoolExecutor.addThread(ThreadPoolExecutor.java:672)
    at java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:697)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:652)
    at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:590)
    at java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:152)
    This code works fine when run on Tomcat or via command-line JVM. However, it chokes in the AppEngine SDK Jetty container (tried with Eclipse plugin and the maven-gae-plugin). AppEngine is likely designed to not allow potentially dangerous programs to run, so I could see them completely disabling thread creation. However, why would Google allow you to create a class, but not allow you to call methods on it? White-listing java.util.concurrent is misleading. Is there some other way to do parallel/simultaneous/concurrent tasks on GAE?
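
    A hedged sketch of one route around the sandbox (whether it is available depends on the SDK version, which the post does not state): App Engine rejects the threads that the default executor factory creates, but later SDKs expose a request-scoped thread factory that can back an executor instead; task queues are the classic alternative for real background work.

        // Sketch: back an executor with App Engine's request-scoped thread
        // factory instead of the default factory the sandbox rejects.
        // ThreadManager availability is an assumption about the SDK version.
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import com.google.appengine.api.ThreadManager;

        public class RequestScopedExecutor {
            public static ExecutorService create() {
                // threads created this way must finish before the request ends
                return Executors.newCachedThreadPool(
                        ThreadManager.currentRequestThreadFactory());
            }
        }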

    Read the article

  • Help Email Account Management among multiple users

    - by CogitoErgoSum
    I preface this by saying it may belong in IT Security; I'm not too sure, so feel free to move it. Currently we have an email account, [email protected], hosted via Google Apps (as is all our email). We had an incident where we had to terminate an employee, but that employee had the password for this account, since we have 20-30 people using it at any given point to manage customer emails, etc. Thinking on this, I feel there must be a better way to manage access. With Google you can associate up to 10 email accounts with another; the problem is we have more like 20-30 people. We were evaluating tools such as Salesforce and Assistly, where people have their own login credentials and the system holds the appropriate SMTP settings for the [email protected] address, so emails are sent from it rather than from a user's personal account. Aside from those options, does anyone have any other thoughts? One suggestion floated was moving everyone to desktop clients and saving the password there, so they could only log in from their physical workstation, but we may have situations where we'd like employees to work remotely. Does anyone have experience with this sort of system, where ~20-30 people respond from one mailbox, and with how to manage security and access?

    Read the article

  • Javascript childNodes does not find all children of a div when appendchild has been used

    - by yesterdayze
    Alright, I am hoping someone can help me out. I apologize up front that this one may be confusing; I have included an example to try to ease the confusion, as this is better seen than heard. I have created a webpage that contains a group, or set of groups, and each group has a subgroup. In a nutshell, this page lets me combine multiple groups containing subgroups into a new group. The page gives the chance to rename the old subgroups before they are combined into new groups, in order to avoid confusion. When a group is renamed, it checks whether a group with that name already exists. If there is one, it copies itself out of its own group and into that group, then deletes the original. If the group does not already exist, it creates that group, copies itself in, and then deletes the original. Subgroups can also be renamed, at which point they move into the group with the same name if it exists, or create a new one if it doesn't.

    The page has a main div. The main div contains 'new sub group' divs. Inside each of those is another div containing the 'old sub group' divs. When renaming a group, I loop through the child nodes of the 'new sub group' div to find each child node; these are then copied into a new div within the main div.

    The crux of the problem is this: if I loop through a DIV and copy all of its DIVs into a new or existing DIV, all is well. When I then try to take that DIV and copy all of its DIVs into another or new DIV, it always skips one of the moved DIVs. For simplicity I have copied the entire working code below. To recreate the issue, click the spot where the image should appear next to the name ewrewrwe and rename it to something else. All is well. Now click that new group the same way and rename it again; you will see it skip one each time. I have linked the page here: http://vtbikenight.com/test.html The link is clean; it is my personal website that I use for a local motorcycle group I am part of. Thanks for the help everyone! Please let me know if I can clarify anything. I know the code is not the best right now; it is just demo code, and my intent is to get the concept working and then streamline it all.
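
    A likely culprit (an editorial note, not part of the original question): childNodes is a live NodeList, so moving a node with appendChild shifts the remaining indexes and a forward loop skips every other element. A minimal sketch of the usual fix, with hypothetical names (moveChildren, sourceDiv, targetDiv):

        // childNodes updates itself as nodes are moved, so snapshot it
        // into a plain array before iterating.
        function moveChildren(sourceDiv, targetDiv) {
            var kids = Array.prototype.slice.call(sourceDiv.childNodes);
            for (var i = 0; i < kids.length; i++) {
                targetDiv.appendChild(kids[i]); // safe: iterating the snapshot
            }
        }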

    Read the article

  • multiple keys and values with google-collections

    - by flash3000
    Hello, I would like to use google-collections to load the following file into a map with multiple keys and values:

        Key1_1, Key2_1, Key3_1, data1_1, 0, 0
        Key1_2, Key2_2, Key3_2, data1_2, 0, 0
        Key1_3, Key2_3, Key3_3, data1_3, 0, 0
        Key1_4, Key2_4, Key3_4, data1_4, 0, 0

    The first three columns are the different keys and the last two integers are the two different values. I have already prepared code which splits the lines into chunks:

        import java.io.BufferedReader;
        import java.io.FileNotFoundException;
        import java.io.FileReader;
        import java.io.IOException;

        public class HashMapKey {
            public static void main(String[] args) throws FileNotFoundException, IOException {
                String inputFile = "inputData.txt";
                BufferedReader br = new BufferedReader(new FileReader(inputFile));
                String strLine;
                while ((strLine = br.readLine()) != null) {
                    String[] line = strLine.replaceAll(" ", "").trim().split(",");
                    for (int i = 0; i < line.length; i++) {
                        System.out.print("[" + line[i] + "]");
                    }
                    System.out.println();
                }
            }
        }

    Unfortunately, I do not know how to store this information with google-collections. Thank you in advance.
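
    One way to do it (a sketch under stated assumptions, not the only idiom): pack the three key columns into an immutable list and use it as the composite key of a Multimap, which can then hold both integer values per key. The class names below are from the google-collections com.google.common.collect package; MultiKeyStore and the hard-coded sample line are illustrative only:

        import com.google.common.collect.ArrayListMultimap;
        import com.google.common.collect.ImmutableList;
        import com.google.common.collect.Multimap;
        import java.util.List;

        public class MultiKeyStore {
            public static void main(String[] args) {
                // Composite key: the three key columns as one immutable list.
                Multimap<List<String>, Integer> values = ArrayListMultimap.create();

                // One already-split line: Key1_1, Key2_1, Key3_1, data1_1, 0, 0
                String[] line = {"Key1_1", "Key2_1", "Key3_1", "data1_1", "0", "0"};
                List<String> key = ImmutableList.of(line[0], line[1], line[2]);
                values.put(key, Integer.parseInt(line[4]));
                values.put(key, Integer.parseInt(line[5]));

                System.out.println(values.get(key)); // prints [0, 0]
            }
        }

    Building the key inside the existing while loop (in place of the System.out.print debugging) would load the whole file the same way.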

    Read the article

  • really weird DNS problem in Ubuntu {after one month, seems like ISP problem}

    - by OmniWired
    Hello everyone. I have been having this random DNS problem in Ubuntu 10.04 and 10.10; it started about two weeks ago, after (I believe) an update. Basically, when I go to a website, I randomly get a message that the website I'm visiting is not available ("Oops! Google Chrome could not connect to ..." and "This webpage is not available."). I tested with Chromium 7.0.515.0 (58587) and Firefox Minefield (4.0ish) and 3.6.9. I have already tried these four things: disabling IPv6 at boot in /etc/default/grub:

        GRUB_CMDLINE_LINUX="ipv6.disable=1"

    and in /etc/sysctl.conf:

        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1
        net.ipv6.conf.lo.disable_ipv6 = 1

    as well as disabling Chromium DNS pre-fetching, and using Google and OpenDNS servers in place of the ISP DNS servers. Nothing improved. Also, no other computers on my network have the same problem, and all computers are wired to the same router. I'm a software engineer who has run out of ideas; please help me. Thanks in advance.

    UPDATE: some programs (Synaptic / Firefox update / Vuze (Azureus)) report "connection refused" for the error. Most of the time a second try will fix the refusal.

    UPDATE 2: I found out with Wireshark that every time I have this problem I get this:

        192.168.0.10  8.8.8.8  ICMP  Destination unreachable (Port unreachable)

    Confirmed an ISP error. ISP: Speedy. Location: Argentina, Buenos Aires (Capital Federal) area.

    Read the article

  • OS X, Chrome, and Spaces annoyance

    - by David Hollman
    Here's my problem: I use Google Chrome as my web browser on Mac OS X Snow Leopard. I am a keyboard-shortcut addict, and I use Quicksilver to create keyboard shortcuts for anything I can. One of the most common things I do is open a new web browser window. But I use Spaces frequently to partition the tasks I am currently working on, and when I open a web browser or web page with a Quicksilver trigger, Spaces switches to the last space where I used Chrome and opens a new tab there, which often distracts me for hours because it brings me to a different space and thus a different task. I can work around this by right-clicking the Google Chrome icon and clicking the "New Window" option, which opens a new window on the current space. I have tried to compose an AppleScript to do something like this, with no success, and it has become a serious problem. Back when I used Firefox, I solved the problem by changing a preference item that says "Always open pop-up links in a new window" or something like that, which was kind of a sledgehammer approach, but it worked. I can always go back to Firefox, but I thought I'd ask my question here first. Anyone with any ideas?

    Read the article

  • PubSubHubBub Hubs

    - by PartlyCloudy
    Hi, I'm currently building a live web application based on the PubSubHubbub protocol, but I have encountered several issues. First, I'm in search of a hub application that I can run on my own server. There are several applications, but most of them are not mature yet, or they don't support the 0.3 spec. The official Google hub runs on Google App Engine and can even be executed locally. Unfortunately, in the local dev server, "Tasks will not run automatically. Push the 'Run' button to execute each task." This behaviour is useful for debugging and understanding the workflow, but in live tests it would be nice not to have to invoke every task manually. Is there a way to tweak the local App Engine server to run tasks automatically? Next, I have a question concerning the spec itself: the Google reference implementation binds the initial publish method to the endpoint URI + /publish, but this is not reflected in the spec. So, are there any mature hubs that can be run locally for debugging? Or are there ways to configure the official Google App Engine hub to run locally and to execute tasks directly? Thanks in advance.
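
    For context on that /publish endpoint (an editorial sketch, not from the question): in the 0.3 spec a publisher notifies a hub with a form-encoded POST of hub.mode=publish plus the topic URL, and a conforming hub answers 204 No Content. A minimal Java sketch, assuming a hub running locally at http://localhost:8080/publish and a hypothetical topic feed URL:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class PublishPing {
            public static void main(String[] args) throws Exception {
                // PubSubHubbub 0.3 publish ping: form-encoded POST to the hub.
                String body = "hub.mode=publish&hub.url="
                        + URLEncoder.encode("http://example.com/feed.xml", "UTF-8");
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://localhost:8080/publish").openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type",
                        "application/x-www-form-urlencoded");
                OutputStream out = conn.getOutputStream();
                out.write(body.getBytes("UTF-8"));
                out.close();
                // A conforming hub replies 204 No Content on success.
                System.out.println("HTTP " + conn.getResponseCode());
            }
        }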

    Read the article

  • Is Gmail Being Blocked by my ISP (wait till you read this)?

    - by James
    This is the strangest thing I have ever encountered. I have a desktop on which I cannot access Gmail or YouTube sign-in (I believe that since YouTube is owned by Google, they both use the same sign-in system). So okay, maybe my ISP is blocking these for some reason, or maybe my firewall is, or maybe there is something wrong with my connectivity, right? No. On other computers that use the same connection via a wireless router, I can access both Gmail and YouTube sign-in just fine. On this computer, which doesn't have a wireless card, so I connect via Ethernet cable (through a USB converter, since the Ethernet port no longer works), I can access all sites and services, including things like AOL and Hotmail. But only when it comes to Gmail do I get complete and utter throttling. I even turned off my AV and firewall momentarily, and no luck. The Gmail page starts to load, and by the midpoint it just stays there loading and loading and loading... it never ends. I tried everything: I reset the modem and router multiple times, and I reinstalled my operating system, going from Vista to Windows 7, hoping a complete reinstall would solve the issue, but no luck. So can anyone, for the life of them, figure out why this could be? And yes, I am going to call my ISP, but not to solve this issue; I want to cancel them and upgrade to cable from DSL anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will). P.S. All this happened in one day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special happening that day. The only thing I can think of is that my ISP, or Google itself, is blocking this computer based on its MAC address, but I don't know if that's even done. Additional info: PC: Windows 7 Ultimate 32-bit. Connection type: DSL. Connecting medium: Ethernet cable via USB converter.

    Read the article
