Search Results

Search found 6668 results on 267 pages for 'global catalog'.

  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on a LAMP stack. I have a MySQL table containing some records; these are queried by a simple SELECT in the Joomla article, as pasted below. This specific Joomla website has caching turned on in Joomla's Global Configuration. I need to randomize the order in which I display the result set each time the page is loaded. Regular PHP/MySQL would offer me two approaches for this: 1. use 'order by RAND()' or any of a number of methods to let a SELECT query return reasonably random results; 2. once PHP gets the result of the SELECT into an array, shuffle the array to get a reasonably random order of items. However, as this Joomla instance has caching turned ON in its Global Configuration, both approaches fail. The first time I load the page the order is randomized, but further reloads do not change the order, as the page is delivered from cache. The instant the cache is disabled, both approaches (shuffle / order by rand) work perfectly. What am I missing? How do I override the global cache for this specific article? A very simple requirement that both PHP and MySQL meet reasonably well is blocked by the Joomla cache that I cannot turn off. Here is the PHP that returns results from the database: <pre> $db = JFactory::getDBO(); $select = "SELECT id FROM jos_mytable;"; //order by RAND() $db->setQuery($select); echo $db->getQuery(); //Show me the Query! $rows = $db->loadObjectList(); //shuffle($rows); foreach($rows as $row) { echo "$row->id"; }
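
    One workaround that survives server-side caching is to render the rows in any fixed order and shuffle them client-side on every page view, since JavaScript runs after the cached HTML has been delivered. A minimal sketch (the list id and the markup are hypothetical additions, not part of the article above):
    <pre>
    <ul id="random-ids"><!-- one <li> per $row->id, rendered by the PHP loop above --></ul>
    <script type="text/javascript">
    // Fisher-Yates shuffle of the cached list items, re-run on every view
    var list = document.getElementById("random-ids");
    var items = Array.prototype.slice.call(list.children);
    for (var i = items.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = items[i]; items[i] = items[j]; items[j] = tmp;
    }
    for (var k = 0; k < items.length; k++) { list.appendChild(items[k]); }
    </script>
    </pre>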

    Read the article

  • Javascript Namespace Help

    - by Jason
    Hi, I have a pretty big JavaScript file with loads of global variables and functions in it, and a piece of code that calls one function from it: myfunc(); Now I have cloned this script and modified some functionality; all function names and variables are named the same in both scripts. So with both scripts loaded and one call to myfunc(), there is a clash: loads of global variables share names, and there are two myfunc()s. What I want to do is wrap the cloned script in a namespace, so that I can change the call to clone.myfunc() to get the new function, while plain myfunc() still refers to the original script. In other words, I can't touch the original script (no permissions) and I want to be able to use both the clone and the original at runtime. This is the script I'm cloning: http://pastebin.com/6KR5T3Ah JavaScript namespaces seem quite tricky. This looks like a nice namespace pattern: var namespace = { foo: function() { }, bar: function() { } }; ... namespace.foo(); However, that requires building an object literal, and the script (as posted above) is humongous at nearly 4000 lines, too much to objectize, I think. Does anyone know a better way to avoid namespace pollution with one script I can't touch and one clone of it, so that I can call myfunc() and clone.myfunc() and all global variables behave in their respective scopes? It's either that, or I go through and rename everything to be unique, which could take a lifetime. This is a Mozilla addon, if that helps context-wise. Thanks.
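
    Since only the original script is untouchable, one low-effort approach is to wrap the entire cloned file in an immediately-invoked function expression: every former global becomes a local of that closure, and only the entry points are exported, so nothing has to be rewritten into an object literal member by member. A sketch of the wrapper (the exported names are illustrative):
    <pre>
    var clone = (function () {
        // paste the whole 4000-line cloned script here, unchanged;
        // its "global" vars and functions become privates of this closure
        function myfunc() { /* cloned behaviour */ }

        // export only what outside code needs to call
        return { myfunc: myfunc };
    })();

    myfunc();        // still resolves to the original script's global
    clone.myfunc();  // resolves to the clone
    </pre>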

    Read the article

  • Is it true that in most Object Oriented Programming Languages, an "i" in an instance method always r

    - by Jian Lin
    In the following code: <script type="text/javascript"> var i = 10; function Circle(radius) { this.r = radius; this.i = radius; } Circle.i = 123; Circle.prototype.area = function() { alert(i); } var c = new Circle(1); var a = c.area(); </script> What is being alerted? The answer is at the end of this question. I found that the i in the alert call either refers to any local (if any), or the global variable. There is no way that it can be the instance variable or the class variable even when there is no local and no global defined. To refer to the instance variable i, we need this.i, and to the class variable i, we need Circle.i. Is this actually true for almost all Object oriented programming languages? Any exception? Are there cases that when there is no local and no global, it will look up the instance variable and then the class variable scope? (or in this case, are those called scope?) the answer is: 10 is being alerted.
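
    The lookup inside area walks the local scope, then enclosing scopes, then the global scope; the instance and the constructor are never on that chain, so explicit qualification is required. A small sketch confirming all three values from the code above:
    <pre>
    Circle.prototype.area = function() {
        alert(i);        // 10  - lexical/global lookup only
        alert(this.i);   // 1   - instance variable, via this
        alert(Circle.i); // 123 - "class" variable, via the constructor
    };
    </pre>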

    Read the article

  • Undefined Variable? But I defined it...

    - by Rob
    Well, before anyone claims there's a duplicate question... (I've noticed that people who can't answer the question tend to run and look for a duplicate, and then report it.) Here is the duplicate you are looking for: http://stackoverflow.com/questions/2481382/php-claims-my-defined-variable-is-undefined However, this isn't quite a duplicate. It gives me a solution, but I'm not really looking for that particular solution. Here is my problem: Notice: Undefined variable: custom Now here is my code: $headers = apache_request_headers(); // Request the visitor's headers. $customheader = "Header: 7ddb6ffab28bb675215a7d6e31cfc759"; //This is what the header should be. foreach ($headers as $header => $value) { $custom .= "$header: $value"; } Clearly, $custom is defined. According to the other question, it's a global and should be marked as one. But how is it a global? And how can I make it a non-global? The script works fine, it still displays what it's supposed to and acts correctly, but when I turn on error messages, it outputs that notice as well. I suppose it's not strictly necessary to fix it, but I'd like to anyway, and to know why it's happening.
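
    The notice fires because the first `.=` reads $custom before anything has been assigned to it; PHP then reports an undefined variable (the "global" advice in the linked question applies to code inside functions, which this snippet, assumed to be at top level, is not). Initializing the variable before the loop removes the notice:
    <pre>
    $custom = ""; // define it first, so .= has something to append to
    foreach ($headers as $header => $value) {
        $custom .= "$header: $value";
    }
    </pre>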

    Read the article

  • SQL Binary Microsoft Access - Combining two tables if specific field values are equal

    - by Jordan
    I am new to Microsoft Access and SQL but have a decent programming background and I believe this problem should be relatively simple. I have two tables that I have imported into Access. I will give you a little context. One table is huge and contains generic, global data. The other table is still big but contains specific, regional data. There is only one common field (or column) between the two tables. Let’s call this common field CF. The other fields in both tables are different. I’ll take you through one iteration of what I need to do. I need to take each CF value in the regional, smaller table and find the common CF value in the larger, global table. After finding the match, I need to take the whole “record” or “row” from the global data and copy it over to the corresponding record in the smaller regional table (This should involve creating the new fields). I need to do this for all CF values in the regional, smaller table. I was recommended to use SQL and a binary search, but I am unfamiliar. Let me know if you have any questions. I appreciate the help!
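
    No search loop is needed; a join on CF does the matching in one statement. A sketch in Access SQL, with RegionalData and GlobalData as hypothetical table names (substitute the real ones):
    <pre>
    SELECT r.*, g.*
    FROM RegionalData AS r
    INNER JOIN GlobalData AS g ON r.CF = g.CF;
    </pre>
    To materialize the result instead of just viewing it, list the wanted global fields explicitly and use SELECT ... INTO CombinedTable, so the duplicated CF column does not collide.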

    Read the article

  • How can one make a 'passthru' function in C++ using macros or metaprogramming?

    - by Ryan
    So I have a series of global functions, say: foo_f1(int a, int b, char *c); foo_f2(int a); foo_f3(char *a); I want to make a C++ wrapper around these, something like: MyFoo::f1(int a, int b, char* c); MyFoo::f2(int a); MyFoo::f3(char* a); There's about 40 functions like this, 35 of them I just want to pass through to the global function, the other 5 I want to do something different with. Ideally the implementation of MyFoo.cpp would be something like: PASSTHRU( f1, (int a, int b, char *c) ); PASSTHRU( f2, (int a) ); MyFoo::f3(char *a) { //do my own thing here } But I'm having trouble figuring out an elegant way to make the above PASSTHRU macro. What I really need is something like the mythical X getArgs() below: MyFoo::f1(int a, int b, char *c) { X args = getArgs(); args++; //skip past implicit this.. ::f1(args); //pass args to global function } But short of dropping into assembly I can't find a good implementation of getArgs().
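
    With a C++11 compiler, a variadic forwarding template generated by a small macro gets close to the desired PASSTHRU without assembly, and the parameter lists never have to be restated. A sketch (it assumes the globals' return types are deducible; pre-C++11 the same effect needs per-arity macros, e.g. via Boost.Preprocessor):
    <pre>
    #include <utility>

    #define PASSTHRU(fn) \
        template <typename... Args> \
        static auto fn(Args&&... args) \
            -> decltype(::fn(std::forward<Args>(args)...)) \
        { return ::fn(std::forward<Args>(args)...); }

    struct MyFoo {
        PASSTHRU(f1)   // MyFoo::f1(...) forwards to ::f1(...)
        PASSTHRU(f2)
        static void f3(char *a);  // the 5 special cases stay hand-written
    };
    </pre>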

    Read the article

  • Sparc Assembly Call corrupts data

    - by Sigge
    I am at the moment working with some assembler code for the Sparc processor family, and I am having some trouble with a piece of code. I think the code and output explain it best, but in short: when I make a call to the function println, the value that I have written to the %fp - 8 memory location is destroyed. Here is the assembler code that I am trying to run: !PROCEDURE main .section ".text" .global main .align 4 main: save %sp, -96, %sp L1: set 96, %l0 mov %l0, %o0 call initObject ; nop mov %o0, %l0 mov %l0, %o0 call Test$go ; nop mov %o0, %l0 mov %l0, %o0 call println ; nop L0: ret restore !END main !PROCEDURE Test$go .section ".text" .global Test$go .align 4 Test$go: save %sp, -96, %sp L3: mov %i0, %l0 set 0, %l0 set -8, %l1 add %fp,%l1, %l1 st %l0, [%l1] set 1, %l0 mov %l0, %o0 call println ; nop set -8, %l0 add %fp,%l0, %l0 ld [%l0], %l0 mov %l0, %o0 call println ; nop set 1, %l0 mov %l0, %i0 L2: ret restore !END Test$go Here is the assembler code for println: .global println .type println,#function println: save %sp,-96,%sp ! block 1 .L193: ! File runtime.c: ! 42 } ! 43 ! 45 /** ! 46 Prints an integer to the standard output stream. ! 47 ! 48 @param i The integer to be printed. ! 49 */ ! 50 void println(int i) { ! 51 printf("%d\n", i); sethi %hi(.L195),%o0 or %o0,%lo(.L195),%o0 call printf mov %i0,%o1 jmp %i7+8 restore This is the output I get when I run this piece of assembler code: 1 67584 1 As you can see, the data located at %fp - 8 has been destroyed. All feedback is appreciated.
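
    A plausible cause, stated as an assumption about the standard SPARC frame layout rather than a verified diagnosis: save %sp, -96, %sp allocates only the minimum frame, all of which is reserved (64 bytes for the register-window spill area, 4 for the struct-return pointer, 24 for six outgoing arguments). %fp - 8 therefore lands inside the outgoing-argument area, which any callee (println calling printf here) is free to overwrite. Reserving extra space pushes the local above that region:
    <pre>
    Test$go:
        save %sp, -104, %sp    ! 96 reserved bytes + 8 for one local, kept 8-byte aligned
        ...
        st   %l0, [%fp - 8]    ! %fp-8 is now outside the outgoing-argument area
    </pre>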

    Read the article

  • Fatal error: Allowed memory size exhausted...

    - by Nano HE
    Hi, I just uploaded my PHP testing script to an online VPS server. The script parses a fairly large XML file (about 4M, 7000 lines). But IE shows the error message below. Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 77 bytes) in /var/www/test/result/index.php on line 26 I am sure the script ran successfully on localhost. Is there any configuration that needs to be enabled or modified on my VPS, such as php.ini or some setting for the Apache server? I just verified that about 200M of memory is available on my VPS. How can I fix this? ...... function startElementHandler ($parser,$name,$attrib){ global $usercount; global $userdata; global $state; // Line #26; //Debug //print "name is: ".$name."\n"; switch ($name) { case $name=="_ID" : { $userdata[$usercount]["first"] = $attrib["FIRST"]; $userdata[$usercount]["last"] = $attrib["LAST"]; $userdata[$usercount]["nick"] = $attrib["NICK"]; $userdata[$usercount]["title"] = $attrib["TITLE"]; break; } ...... default : {$state=$name;break;} } }
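
    The number in the message (16777216 bytes = 16M) is PHP's memory_limit on the VPS, not the machine's free RAM, so the 200M of system memory never comes into play. Raising the limit in php.ini (memory_limit = 128M, then restart Apache) is the clean fix; if the host allows it, it can also be raised per script at the top of index.php:
    <pre>
    ini_set('memory_limit', '128M'); // raise the PHP limit for this script only
    </pre>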

    Read the article

  • gems, bundle command, and ruby not found in new terminal window

    - by user3579987
    So I installed Ruby with rvm, ran a bundle install and installed a bunch of gems, etc. It all works just fine in the original terminal window I did all of this in, but I opened up a new terminal window and now I'm getting errors like bundle: command not found and gem: command not found. I tried making a symbolic link for the gem command, but then when I do gem list it displays a much shorter list of my local gems and not at all the ones I need for bundle install. Is there something I need to do to configure bash or rvm so that it recognizes that I did indeed install all the gems I have installed? name@crunchbang:~/ug_research_portal$ which gem /home/name/.rvm/rubies/ruby-2.1.1/bin/gem name@crunchbang:~/ug_research_portal$ which ruby /home/name/.rvm/rubies/ruby-2.1.1/bin/ruby And in ~/.bashrc: GEM_HOME="/home/name/.rvm/gems/ruby-2.1.1" GEMGLOBAL_HOME="/home/name/.rvm/gems/ruby-2.1.1@global" export PATH=$PATH:$GEM_HOME/bin:$GEMGLOBAL_HOME/bin:$HOME/.rvm/bin Edit: Looks like my $PATH is somehow wrong? bash: /home/name/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/home/name/.rvm/gems/ruby-2.1.1/bin:/home/name/.rvm/gems/ruby-2.1.1@global/bin:/home/name/.rvm/bin/: No such file or directory which is odd since ls for /home/name/.rvm/bin and /home/name/.rvm/gems/ruby-2.1.1@global/bin and /home/name/.rvm/gems/ruby-2.1.1/bin work just fine. Psych, I was just typing $PATH instead of echo "$PATH", which was giving me the No such file or directory error. The original problem still stands though.
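
    Exporting GEM_HOME and PATH by hand does not reproduce rvm's environment; rvm expects its init script to be sourced by every new shell (and login vs. non-login shells read different startup files). The standard line, usually placed at the end of ~/.bashrc or ~/.bash_profile:
    <pre>
    # load rvm into every new shell session
    [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
    </pre>
    After adding it, open a new terminal; `type rvm | head -1` should then report that rvm is a function.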

    Read the article

  • cuda 5.0 namespaces for constant memory variable usage

    - by Psypher
    In my program I want to use a structure containing constant variables and keep it on the device for as long as the program executes. I have several header files containing the declarations of '__global__' functions and their respective '.cu' files with their definitions. I kept this scheme because it helps me keep similar code in one place: e.g. all the '__device__' functions required to complete 'KERNEL_1' are separated from those '__device__' functions required to complete 'KERNEL_2', along with the kernel definitions. I had no problems with this scheme during compilation and linking, until I encountered constant variables. I want to use the same constant variable in all kernels and device functions, but it doesn't seem to work. ########################################################################## CODE EXAMPLE ########################################################################### filename: 'common.h' -------------------------------------------------------------------------- typedef struct { double height; double weight; int age; } __CONSTANTS; __constant__ __CONSTANTS d_const; --------------------------------------------------------------------------- filename: main.cu --------------------------------------------------------------------------- #include "common.h" #include "gpukernels.h" int main(int argc, char **argv) { __CONSTANTS T; T.height = 1.79; T.weight = 73.2; T.age = 26; cudaMemcpyToSymbol(d_const, &T, sizeof(__CONSTANTS)); test_kernel <<< 1, 16 >>>(); cudaDeviceSynchronize(); } --------------------------------------------------------------------------- filename: gpukernels.h --------------------------------------------------------------------------- __global__ void test_kernel(); --------------------------------------------------------------------------- filename: gpukernels.cu --------------------------------------------------------------------------- #include <stdio.h> #include "gpukernels.h" #include "common.h" __global__ void test_kernel() { printf("Id: %d, height: %f, weight: %f\n", threadIdx.x, d_const.height, d_const.weight); } When I execute this code, the kernel runs and displays the thread ids, but the constant values are displayed as zeros. How can I fix this?
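
    A likely explanation: without relocatable device code, each translation unit that includes common.h gets its own copy of d_const, so the cudaMemcpyToSymbol in main.cu fills main.cu's copy while test_kernel reads gpukernels.cu's untouched copy. Two common fixes are moving the kernel and the copy into the same .cu file, or, with CUDA 5.0's separate compilation, keeping a single definition behind an extern declaration. A sketch of the latter (constants.cu is a hypothetical new file):
    <pre>
    // common.h -- declaration only, shared by all .cu files
    extern __constant__ __CONSTANTS d_const;

    // constants.cu -- the one real definition
    #include "common.h"
    __constant__ __CONSTANTS d_const;

    // build with relocatable device code so the symbol links across units:
    //   nvcc -rdc=true main.cu gpukernels.cu constants.cu
    </pre>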

    Read the article

  • Should I use a class in this: Reading an XML file using lxml.

    - by PulpFiction
    Hi everyone. This question is a continuation of my previous question, in which I asked about passing around an ElementTree. I need to read the XML files only, and to solve this I decided to create a global ElementTree and then parse it wherever required. My question is: is this an acceptable practice? I heard global variables are bad. If I don't make it global, it was suggested that I make a class. But do I really need to create a class? What benefits would that approach give me? Note that I would be handling only one ElementTree instance per run, and the operations are read-only. If I don't use a class, how and where do I declare that ElementTree so that it is available globally? (Note that I would be importing this module.) Please answer this question keeping in mind that I am a beginner to development, and at this stage I can't figure out whether to use a class or just go with the functional-style programming approach.
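
    A middle ground between a bare global and a full class is a module-level cache behind an accessor function: importers call the function instead of touching the variable, and the parse happens once. A sketch (module and file names are made up for illustration):
    <pre>
    # xmlconfig.py
    from lxml import etree

    _tree = None  # module-private cache, not part of the public interface

    def get_tree(path="data.xml"):
        """Parse the file once; return the same ElementTree afterwards."""
        global _tree
        if _tree is None:
            _tree = etree.parse(path)
        return _tree
    </pre>
    Other modules then do `from xmlconfig import get_tree` and query `get_tree().find(...)`; a class only becomes worthwhile once there are several trees or mutable state to manage.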

    Read the article

  • Clustered Graphs Visualization Techniques

    - by jameszhao00
    I need to visualize a relatively large graph (6K nodes, 8K edges) that has the following properties: Distinct Clusters. Approximately 50-100 Nodes per cluster and moderate interconnectivity at the cluster level Minimal (5-10 inter-cluster edges per cluster) interconnectivity between clusters Let global edge overlap = The edge overlaps caused by directly visualizing a graph of Clusters = {A, B, C, D, E}, Edges = {Pentagram of those clusters, which is non-planar by the way and will definitely generate edge overlap if you draw it out directly} Let Local Edge Overlap = the above but { A, B, C, D, E } are just nodes. I need to visualize graphs with the above in a way that satisfies the following requirements No global edge overlap (i.e. edge overlaps caused by inter-cluster properties is not okay) Local edge overlap within a cluster is fine Anyone have thoughts on how to best visualize a graph with the requirements above? One solution I've come up with to deal with the global edge overlap is to make sure a cluster A can only have a max of 1 direct edge to another cluster (B) during visualization. Any additional inter-cluster edges between cluster A - C, A - D, ... are disconnected and additional node/edges A - A_C, C - C_A, A - A_D, D - D_A... are created. Anyone have any thoughts?
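
    The proxy-node idea in the last paragraph can be phrased as a single pass over the edge list: keep intra-cluster edges and the first direct edge between each cluster pair, and re-route every further inter-cluster edge through boundary nodes. A sketch (cluster() is an assumed node-to-cluster lookup, not from the question):
    <pre>
    def split_extra_intercluster_edges(edges, cluster):
        seen = set()  # unordered cluster pairs already joined by a direct edge
        out = []
        for a, b in edges:
            ca, cb = cluster(a), cluster(b)
            pair = (min(ca, cb), max(ca, cb))
            if ca == cb or pair not in seen:
                seen.add(pair)
                out.append((a, b))  # intra-cluster, or first direct inter-cluster edge
            else:
                # replace the extra edge with two stubs ending at proxy nodes
                out.append((a, "%s_to_%s" % (a, cb)))
                out.append((b, "%s_to_%s" % (b, ca)))
        return out
    </pre>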

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 1

    - by shiju
     A while back, I blogged NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2 on how to use MongoDB with an ASP.NET MVC application. The NoSQL movement is getting big attention, and RavenDB is the latest addition to the NoSQL and document database world. RavenDB is an Open Source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. Raven stores schema-less JSON documents, allows you to define indexes using Linq queries, and focuses on low latency and high performance. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. RavenDB comes with two components, a server and a client API. RavenDB is a REST based system, so you can write your own HTTP client API. As a .NET developer, RavenDB is becoming my favorite document database. Unlike other document databases, RavenDB supports transactions using System.Transactions. It also supports both embedded and server modes of the database. You can access the RavenDB site at http://ravendb.net A demo App with ASP.NET MVC Let's create a simple demo app with RavenDB and ASP.NET MVC. To work with RavenDB, do the following steps. Go to http://ravendb.net/download and download the latest build. Unzip the downloaded file. Go to the /Server directory and run RavenDB.exe. This will start the RavenDB server listening on localhost:8080. You can change the port of RavenDB by modifying the "Raven/Port" appSetting value in the RavenDB.exe.config file. When running, RavenDB will automatically create a database in the /Data directory. You can change the data directory name by modifying the "Raven/DataDir" appSetting value in the RavenDB.exe.config file. RavenDB provides a browser-based admin tool. When the Raven server is running, you can view and edit documents and indexes using the browser-based admin tool, available at http://localhost:8080. Below are some screenshots of the web admin tool. Working with ASP.NET MVC To work with RavenDB in our demo ASP.NET MVC application, do the following steps. Step 1 - Add a reference to the Raven Client API In our ASP.NET MVC application, add a reference to the Raven.Client.Lightweight.dll from the Client directory. Step 2 - Create DocumentStore The document store should be created once per application. Let's create a DocumentStore on application start-up in the Global.asax.cs. documentStore = new DocumentStore { Url = "http://localhost:8080/" }; documentStore.Initialise(); The above code creates a RavenDB document store listening to the server at localhost on port 8080. Step 3 - Create DocumentSession on BeginRequest Let's create a DocumentSession on the BeginRequest event in the Global.asax.cs. We use the document session for every unit of work. In our demo app, every HTTP request is a single Unit of Work (UoW).
BeginRequest += (sender, args) => HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession(); Step 4 - Destroy the DocumentSession on EndRequest EndRequest += (o, eventArgs) => { var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable; if (disposable != null) disposable.Dispose(); }; At the end of the HTTP request, we destroy the DocumentSession object. The code block below shows all the code in the Global.asax.cs. private const string RavenSessionKey = "RavenMVC.Session"; private static DocumentStore documentStore; protected void Application_Start() { //Create a DocumentStore in Application_Start //DocumentStore should be created once per application and stored as a singleton. documentStore = new DocumentStore { Url = "http://localhost:8080/" }; documentStore.Initialise(); AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); //DI using Unity 2.0 ConfigureUnity(); } public MvcApplication() { //Create a DocumentSession on BeginRequest //create a document session for every unit of work BeginRequest += (sender, args) => HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession(); //Destroy the DocumentSession on EndRequest EndRequest += (o, eventArgs) => { var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable; if (disposable != null) disposable.Dispose(); }; } //Getting the current DocumentSession public static IDocumentSession CurrentSession { get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; } } We have set up all the necessary code in the Global.asax.cs for working with RavenDB. For our demo app, let's write a domain class: public class Category { public string Id { get; set; } [Required(ErrorMessage = "Name Required")] [StringLength(25, ErrorMessage = "Must be less than 25 characters")] public string Name { get; set;} public string Description { get; set; } } We have created a simple domain entity, Category. Let's create a repository class for performing CRUD operations against our domain entity Category.
public interface ICategoryRepository { Category Load(string id); IEnumerable<Category> GetCategories(); void Save(Category category); void Delete(string id); } public class CategoryRepository : ICategoryRepository { private IDocumentSession session; public CategoryRepository() { session = MvcApplication.CurrentSession; } //Load category based on Id public Category Load(string id) { return session.Load<Category>(id); } //Get all categories public IEnumerable<Category> GetCategories() { var categories = session.LuceneQuery<Category>() .WaitForNonStaleResults() .ToArray(); return categories; } //Insert/Update category public void Save(Category category) { if (string.IsNullOrEmpty(category.Id)) { //insert new record session.Store(category); } else { //edit record var categoryToEdit = Load(category.Id); categoryToEdit.Name = category.Name; categoryToEdit.Description = category.Description; } //save the document session session.SaveChanges(); } //delete a category public void Delete(string id) { var category = Load(id); session.Delete<Category>(category); session.SaveChanges(); } } For every CRUD operation, we take the current document session object from the HttpContext: session = MvcApplication.CurrentSession; We are calling the static property CurrentSession from the Global.asax.cs: public static IDocumentSession CurrentSession { get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; } } Retrieve Entities The Load method gets a single Category object based on the Id. RavenDB works on REST principles, and the Id looks like categories/1. The Id is created automatically when a new object is inserted into the document store. The REST URI categories/1 represents a single category object with an Id of 1. public Category Load(string id) { return session.Load<Category>(id); } The GetCategories method returns all the categories by calling the session.LuceneQuery method. RavenDB uses a Lucene query syntax for querying; I will explain querying and indexing in more detail in future posts. public IEnumerable<Category> GetCategories() { var categories = session.LuceneQuery<Category>() .WaitForNonStaleResults() .ToArray(); return categories; } Insert/Update entity To insert or update a Category entity, we have created a Save method in the repository class. If the Id property of Category is null or empty, we call the Store method of the DocumentSession to insert a new record. To edit an existing record, we load the Category object and assign the new values to the loaded object. session.SaveChanges() then saves the changes to the document store.
//Insert/Update category public void Save(Category category) { if (string.IsNullOrEmpty(category.Id)) { //insert new record session.Store(category); } else { //edit record var categoryToEdit = Load(category.Id); categoryToEdit.Name = category.Name; categoryToEdit.Description = category.Description; } //save the document session session.SaveChanges(); } Delete Entity In the Delete method, we call the document session's Delete method and then SaveChanges to reflect the change in the document store. public void Delete(string id) { var category = Load(id); session.Delete<Category>(category); session.SaveChanges(); } Let's create an ASP.NET MVC controller and controller actions for handling the CRUD operations for the domain class Category: public class CategoryController : Controller { private ICategoryRepository categoyRepository; //DI enabled constructor public CategoryController(ICategoryRepository categoyRepository) { this.categoyRepository = categoyRepository; } public ActionResult Index() { var categories = categoyRepository.GetCategories(); if (categories == null) return RedirectToAction("Create"); return View(categories); } [HttpGet] public ActionResult Edit(string id) { var category = categoyRepository.Load(id); return View("Save",category); } // GET: /Category/Create [HttpGet] public ActionResult Create() { var category = new Category(); return View("Save", category); } [HttpPost] public ActionResult Save(Category category) { if (!ModelState.IsValid) { return View("Save", category); } categoyRepository.Save(category); return RedirectToAction("Index"); } [HttpPost] public ActionResult Delete(string id) { categoyRepository.Delete(id); var categories = categoyRepository.GetCategories(); return PartialView("CategoryList", categories); } } RavenDB is an awesome document database, and I hope it will be the winner in the .NET document database space. The source code of the demo application is available at http://ravenmvc.codeplex.com/

    Read the article

  • "Error in the Site Data Web Service." when performing crawl

    - by Janis Veinbergs
    Installed SharePoint Services v3 (SP2, October 2009 cumulative updates, Language Pack), attached to a content database I had previously (all works). Installed Search Server 2008 Express (with language pack) on top of WSS, and crawl does not work. However, it works for a newly created web application + database. I was playing around with accounts and permissions to try to get it working. Currently I have a WSS_Crawler account with these permissions: Office Search Server runs with the WSS_Crawler account; the config database has read permissions for WSS_Crawler; the content database has read permissions for WSS_Crawler; WSS_Crawler is owner of the search database; WSS_Crawler was added to the SQL Server browser user group and administrators. Yes, I've given more permissions than needed, but it doesn't even work with that, and I don't know if it's a permission problem or something else. The crawl log says there is an Error in the Site Data Web Service., nothing more. There were known issues with a similar error, Error in the Site Data Web Service. (Value does not fall within the expected range.), but this is not the case, as that is an old issue and I hope its fix has been included in SP2... Logs are from oldest to newest (descending order). They don't appear to be very helpful. Crawl log http://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM sts3://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM sts3://serveris/contentdbid={55180cfa-9d2d-46e4... Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM http://serveris/test Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM http://serveris Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM EventLog No errors in the EventLog, just some Information events that Office Server Search provides: The search service started. Successfully stored the application configuration registry snapshot in the database. Context: Application 'SharedServices Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: Portal_Content. A master merge was started due to an external request. Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog Portal_Content. Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: AnchorProject. A master merge was started due to an external request. Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog AnchorProject. ULS Log Just some information, but no exceptions or unexpected errors: 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Insert crawl 771 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Request Start Crawl 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Advise status change 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:28.28 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 8wn6 Information A full crawl was started on 'Local Office SharePoint Server sites' by BALTICOVO\janis.veinbergs.
03/15/2010 09:03:28.43 mssdmn.exe (0x1750) 0x10F8 ULS Logging Unified Logging Service 8wsv High ULS Init Completed (mssdmn.exe, Microsoft.Office.Server.Native.dll) 03/15/2010 09:03:30.48 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0v Medium Create CCache 03/15/2010 09:03:30.56 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0z Medium Create CUserCatalogCache 03/15/2010 09:03:32.06 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 90ge Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 03/15/2010 09:03:32.09 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 7phq High GetProtocolConfigHelper failed in GetNotesInterface(). 03/15/2010 09:03:34.26 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:35.92 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.32 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.23 mssdmn.exe (0x1750) 0x1850 Search Server Common MS Search Indexing 8z14 Medium Test TRACE (NULL):(null), (NULL)(null), (CrLf): , end 03/15/2010 09:03:39.04 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x0B24 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=test/siteid={390611b2-55f3-4a99-8600-778727177a28}/weburl=/webid={fb0e4bff-65d5-4ded-98d5-fd099456962b}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x1260 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. 
error 2147755542, strWebUrl http://serveris/test 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=/siteid={505443fa-ef12-4f1e-a04b-d5450c939b78}/weburl=/webid={c5a4f8aa-9561-4527-9e1a-b3c23200f11c}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 24, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Remove crawl 771 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Insert crawl 772 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project Portal_Content - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:44.82 mssearch.exe (0x1B2C) 0x1DD0 Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:44.95 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.51 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.39 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 
0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.01 mssearch.exe (0x1B2C) 0x1C6C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.87 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.29 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.53 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.67 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.82 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.84 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.90 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.00 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Remove crawl 772 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Insert crawl 773 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common 
GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Removed start crawl request from Queue 0, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2942 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:56.71 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:56.78 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.40 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Remove crawl 773 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 What could be wrong here - any clues?

    Read the article

  • Unable to install SQL 2008 on Windows 7

    - by Axel
    SQL 2008 install hangs on Windows 7. The story: trying to install SQL 2008 on Windows 7 hangs on SqlEngineDBStartconfigAction_install_configrc_Cpu32. What I tried: uninstall hangs on validation; manual uninstall using msiinv.exe and msiexec /x works; added the SQL service accounts to local admins, no help; turned off UAC, no help. Last lines in the setup log: 2010-04-01 16:18:05 SQLEngine: : Checking Engine checkpoint 'GetSqlServerProcessHandle' 2010-04-01 16:18:05 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' to be created 2010-04-01 16:18:07 SQLEngine: --SqlServerServiceSCM: Waiting for nt event 'Global\sqlserverRecComplete' or sql process handle to be signaled 2010-04-01 16:18:07 SQLEngine: : Checking Engine checkpoint 'WaitSqlServerStartEvents' 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize script 2010-04-01 16:18:53 Slp: Sco: Attempting to initialize default connection string 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NotSpecified 2010-04-01 16:18:53 Slp: Sco: Attempting to set script connection protocol to NamedPipes 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connection String: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup 2010-04-01 16:18:53 SQLEngine: : Checking Engine checkpoint 'ServiceConfigConnect' 2010-04-01 16:18:53 SQLEngine: --SqlDatabaseServiceConfig: Connecting to SQL.... 2010-04-01 16:18:53 Slp: Sco: Attempting to connect script 2010-04-01 16:18:53 Slp: Connection string: Data Source=\\.\pipe\SQLLocal\MSSQLSERVER;Initial Catalog=master;Integrated Security=True;Pooling=False;Network Library=dbnmpntw;Application Name=SqlSetup And now comes the fun part: when I open the configuration manager I can see the service running, and I enabled named pipes and TCP/IP and restarted the service. I'm able to connect to the server using an OLE DB connection, but not with the Native Client. And what I find suspicious is the following error in my app log: .NET Runtime Optimization Service (clr_optimization_v2.0.50727_32) - Failed to compile: C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\Tools\VDT\DataProjects.dll . Error code = 0x8007000b On MS Connect this is reported as a bug, but MS is unable to reproduce the problem, although when you search the fora I'm not the only one with this problem. So any help is appreciated.

    Read the article

  • What exactly is a Mobile mouse? + Mouse Recommendation

    - by chobo2
    I am really disappointed with Logitech. My first cordless wireless mouse was from them and it lasted like 5 years. So I decided to get another one from them: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10099373&catid= And this mouse sucks bad. After 6 months it broke. I returned it under warranty and got a new one, and now, 4-6 months later, it is on the verge of breaking again.... You pay like $50 for this mouse and it lasts like 6 months; that's sad. I just lost faith in Logitech mice, as I remember my bro also had a Logitech mouse that broke after like 6 months. He then bought another Logitech mouse (different model) that has been working for maybe 2 years (and no signs of breaking), but I am not crazy about that mouse (I don't like the 2 buttons by the wheel) and I'm not sure they even sell it anymore (maybe they got rid of it because it lasts too long). http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1578495&CatId=1285 So I am looking at a Microsoft mouse: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10125565&catid= I am looking at this one, but I am not sure what they mean by mobile mouse. I think that is what MS calls notebook mice, so I am not sure if this would be a good mouse for a desktop. I see it uses a micro USB receiver, but I am not sure if it is smaller than a standard one. Almost all the mice I looked at at futureshop.ca or Staples are labeled notebook or mobile mice, so I'm not sure which mice would be right for me. I don't want a corded one, though. I really liked the LX6 design a lot, but it can't last more than 6 months. Thanks

    Read the article

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), a dashboard using Passenger (puppet-server2), and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests. The nodes have checked in to the dashboard OK. The puppetdb appears to be working; log files as follows: 2012-12-13 17:53:10,899 INFO [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net 2012-12-13 17:53:11,041 INFO [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net 2012-12-13 17:55:28,600 INFO [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net 2012-12-13 17:55:28,729 INFO [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net The puppet master has the following configured in puppet.conf [master] certname = puppet-server1.test.net storeconfigs = true storeconfigs_backend = puppetdb reports = store, http reporturl = http://puppet-server2.test.net/reports/upload The puppet master has the following configured in auth.conf #access for puppet dashboard facts path /facts auth yes method find, search allow dashboard The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml # Hostname of the inventory server. inventory_server: 'puppet-server3.test.net' # Port for the inventory server. inventory_port: 8081 The inventory is on, as I see a link to the inventory in the dashboard server. But I am getting this error: Inventory Could not retrieve facts from inventory service: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished Clearly an SSL error, but I have followed the documentation and have no idea how to fix this. Can anyone help please? Oli

    Read the article

  • How to Remove Header in LaTeX

    - by Tim
    Hi, I would like to not show the name of each chapter in the header of each of its pages. I would also like to have nothing in the headers for the abstract, acknowledgement, table of contents, list of figures and list of tables. But currently I have a header on each page, for example: Here is the code where I specify these parts in my tex file: \begin{abstract} ... \end{abstract} \begin{acknowledgement} ... \end{acknowledgement} % generate table of contents \tableofcontents % generate list of tables \listoftables % generate list of figures \listoffigures \chapter{Introduction} \label{chp1} %% REFERENCES \appendix \input{appendiximages.tex} \bibliographystyle{plain} %%\bibliographystyle{abbrvnat} \bibliography{thesis} Here is what I believe to be the definition of the commands. More details can be found in the two files jhu12.clo and thesis.cls. % \chapter: \def\chaptername{Chapter} % ABSTRACT % MODIFIED to include section name in headers \def\abstract{ \newpage \dsp \chapter*{\abstractname\@mkboth{\uppercase{\abstractname}}{\uppercase{\abstractname}}} \fmfont \vspace{8pt} \addcontentsline{toc}{chapter}{\abstractname} } \def\endabstract{\par\vfil\null} % DEDICATION % Modified to make dedication its own section %\newenvironment{dedication} %{\begin{alwayssingle}} %{\end{alwayssingle}} \def\dedication{ \newpage \dsp \chapter*{Dedication\@mkboth{DEDICATION}{DEDICATION}} \fmfont} \def\endacknowledgement{\par\vfil\null} % ACKNOWLEDGEMENTS % MODIFIED to include section name in headers \def\acknowledgement{ \newpage \dsp \chapter*{\acknowledgename\@mkboth{\uppercase{\acknowledgename}}{\uppercase{\acknowledgename}}} \fmfont \addcontentsline{toc}{chapter}{\acknowledgename} } \def\endacknowledgement{\par\vfil\null} \def\thechapter {\arabic{chapter}} % \@chapapp is initially defined to be '\chaptername'. The \appendix % command redefines it to be '\appendixname'. % \def\@chapapp{\chaptername} \def\chapter{ \clearpage \thispagestyle{plain} \if@twocolumn % IF two-column style \onecolumn % THEN \onecolumn \@tempswatrue % @tempswa := true \else \@tempswafalse % ELSE @tempswa := false \fi \dsp % double spacing \secdef\@chapter\@schapter} % TABLEOFCONTENTS % In ucthesis style, \tableofcontents, \listoffigures, etc. are always % set in single-column style. @restonecol \def\tableofcontents{\@restonecolfalse \if@twocolumn\@restonecoltrue\onecolumn\fi %%%%% take care of getting page number in right spot %%%%% \clearpage % starts new page \thispagestyle{botcenter} % Page style of frontmatter is botcenter \global\@topnum\z@ % Prevents figures from going at top of page %%%%% \@schapter{\contentsname \@mkboth{\uppercase{\contentsname}}{\uppercase{\contentsname}}}% {\ssp\@starttoc{toc}}\if@restonecol\twocolumn\fi} \def\l@part#1#2{\addpenalty{-\@highpenalty}% \addvspace{2.25em plus\p@}% space above part line \begingroup \@tempdima 3em % width of box holding part number, used by \parindent \z@ \rightskip \@pnumwidth %% \numberline \parfillskip -\@pnumwidth {\large \bfseries % set line in \large boldface \leavevmode % TeX command to enter horizontal mode. #1\hfil \hbox to\@pnumwidth{\hss #2}}\par \nobreak % Never break after part entry \global\@nobreaktrue %% Added 24 May 89 as \everypar{\global\@nobreakfalse\everypar{}}%% suggested by %% Jerry Leichter \endgroup} % LIST OF FIGURES % % Single-space list of figures, add it to the table of contents.
\def\listoffigures{\@restonecolfalse \if@twocolumn\@restonecoltrue\onecolumn\fi %%%%% take care of getting page number in right spot %%%%% \clearpage \thispagestyle{botcenter} % Page style of frontmatter is botcenter \global\@topnum\z@ % Prevents figures from going at top of page. \@schapter{\listfigurename\@mkboth{\uppercase{\listfigurename}}% {\uppercase{\listfigurename}}} \addcontentsline{toc}{chapter}{\listfigurename} {\ssp\@starttoc{lof}}\if@restonecol\twocolumn\fi} \def\l@figure{\@dottedtocline{1}{1.5em}{2.3em}} % bibliography \def\thebibliography#1{\chapter*{\bibname\@mkboth {\uppercase{\bibname}}{\uppercase{\bibname}}} \addcontentsline{toc}{chapter}{\bibname} \list{\@biblabel{\arabic{enumiv}}}{\settowidth\labelwidth{\@biblabel{#1}}% \leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumiv}% \let\p@enumiv\@empty \def\theenumiv{\arabic{enumiv}}}% \def\newblock{\hskip .11em plus.33em minus.07em}% \sloppy\clubpenalty4000\widowpenalty4000 \sfcode`\.=\@m} Thanks and regards! EDIT: I just replaced \thispagestyle{botcenter} with \thispagestyle{plain}. The latter is said to clear the header (http://en.wikibooks.org/wiki/LaTeX/Page_Layout), but it does not. What should I do? Thanks!
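
    For what it's worth, \thispagestyle only affects the single page it is issued on, and this class's \chapter re-issues \thispagestyle{plain} for each opening page anyway; the running chapter names on the following pages come from the class's headings-style page style, fed by the \@mkboth calls above. Two usual ways to blank the running heads document-wide, untested against this particular thesis.cls, so treat this as a sketch:
    <pre>
    \pagestyle{plain}   % page number only, no running chapter heads

    % or, for finer control, with the fancyhdr package:
    \usepackage{fancyhdr}
    \pagestyle{fancy}
    \fancyhf{}                        % clear all header and footer fields
    \fancyfoot[C]{\thepage}           % keep a centered page number
    \renewcommand{\headrulewidth}{0pt}
    \fancypagestyle{plain}{\fancyhf{}\fancyfoot[C]{\thepage}} % chapter openers too
    </pre>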

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it covers both the distribution of a domain-name-to-many-IP mapping across many DNS servers and replying to clients from the geographically closest (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations) these sound like the two key features one would need.

    DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks, so my assumption was that each of them transparently offers both killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out the two, referring to the second (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charging extra for it. If a core tenet of an Anycast-capable system is to do this already, why is that functionality earmarked as an extra feature? What is "GeoDNS" doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already)?

    I get extra confused when a DNS service like Route53, which doesn't support this nebulous "GeoDNS" feature, lists functionality like:

        Fast – Using a global anycast network of DNS servers around the
        world, Route 53 is designed to automatically route your users to
        the optimal location depending on network conditions. As a result,
        the service offers low query latency for your end users, as well
        as low update latency for your DNS record management needs.

    ...which sounds exactly like what GeoDNS is intended to do, yet geographically directing clients is something they explicitly don't support yet.

    Ultimately, I am looking for the following two features from a DNS provider:

    1. Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do).
    2. Respond to client requests for that domain with the IP address of the server nearest the requester.

    As mentioned, this all seems to be part of an "Anycast" DNS service (which all of these services are), but their features and marketing suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
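
    To see the first of those two features in isolation (an editor's illustration, not from the original question; the addresses shown are placeholders and real output varies by resolver and over time), query a multi-homed name directly:

        $ dig +short google.com
        173.194.44.80
        173.194.44.81
        173.194.44.82
        # Several A records for one name is ordinary round-robin DNS. Which
        # addresses come back can differ by resolver location when the
        # provider gives geo-aware answers; that per-location tailoring of
        # the answer is what the "GeoDNS" add-ons sell, while anycast by
        # itself only shortens the path to the DNS server doing the answering.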

    Read the article

  • Trouble connecting a Ubuntu system to IPv6 tunnel over NAT

    - by John Millikin
    I'm trying to set up an IPv6 tunnel via Hurricane Electric's tunnel-broker service. I've configured my system using their example commands:

        # $ipv4a = tunnel server's IPv4 IP
        # $ipv4b = user's IPv4 IP
        # $ipv6a = tunnel server's side of point-to-point /64 allocation
        # $ipv6b = user's side of point-to-point /64 allocation
        ip tunnel add he-ipv6 mode sit remote $ipv4a local $ipv4b ttl 255
        ip link set he-ipv6 up
        ip addr add $ipv6b dev he-ipv6
        ip route add ::/0 dev he-ipv6

    and have configured my desktop to be in my NAT router's DMZ. The router is running Tomato firmware. But I can't ping any IPv6 services:

        $ ping6 -I he-ipv6 '2001:470:1f04:454::1'
        PING 2001:470:1f04:454::1(2001:470:1f04:454::1) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
        From 2001:470:1f04:454::2 icmp_seq=1 Destination unreachable: Address unreachable
        From 2001:470:1f04:454::2 icmp_seq=2 Destination unreachable: Address unreachable

    I can ping my local address:

        $ ping6 -I he-ipv6 '2001:470:1f04:454::2'
        PING 2001:470:1f04:454::2(2001:470:1f04:454::2) from 2001:470:1f04:454::2 he-ipv6: 56 data bytes
        64 bytes from 2001:470:1f04:454::2: icmp_seq=1 ttl=64 time=0.037 ms
        64 bytes from 2001:470:1f04:454::2: icmp_seq=2 ttl=64 time=0.039 ms

    I don't know much about routing, but results I found online suggested the output of ip -6 route and ip addr could be useful:

        $ ip -6 route
        2001:470:1f04:454::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
        fe80::/64 dev virbr0 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth1 proto kernel metric 256 mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 via :: dev he-ipv6 proto kernel metric 256 mtu 1480 advmss 1420 hoplimit 4294967295
        default dev he-ipv6 metric 1024 mtu 1480 advmss 1420 hoplimit 4294967295

        $ ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 100
            link/ether 00:1c:c0:a1:98:b2 brd ff:ff:ff:ff:ff:ff
            inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1
            inet6 fe80::21c:c0ff:fea1:98b2/64 scope link
               valid_lft forever preferred_lft forever
        3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
            link/ether 36:4c:33:ab:0d:c6 brd ff:ff:ff:ff:ff:ff
            inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
            inet6 fe80::344c:33ff:feab:dc6/64 scope link
               valid_lft forever preferred_lft forever
        4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 00:76:62:6e:65:74 brd ff:ff:ff:ff:ff:ff
        5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 7e:29:5e:7c:ba:93 brd ff:ff:ff:ff:ff:ff
        6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
            link/sit 0.0.0.0 brd 0.0.0.0
        7: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
            link/sit 24.130.225.239 peer 72.52.104.74
            inet6 2001:470:1f04:454::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::1882:e1ef/128 scope link
               valid_lft forever preferred_lft forever
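
    For readers debugging the same setup (an editor's suggestion, not part of the original question): the ip addr output shows the tunnel bound to the public address 24.130.225.239, but behind NAT a 6in4 (IP protocol 41) tunnel normally needs the machine's private address as its local endpoint, with the router passing protocol 41 through. A hedged, untested re-run of the broker's commands under that assumption:

        # Assumes eth1's private address 192.168.1.10 from the post; $ipv4a
        # and $ipv6b as in the broker's example.
        ip tunnel del he-ipv6
        ip tunnel add he-ipv6 mode sit remote $ipv4a local 192.168.1.10 ttl 255
        ip link set he-ipv6 up
        ip addr add $ipv6b dev he-ipv6
        ip route add ::/0 dev he-ipv6
        # The tunnel broker must also see your current public IPv4 address,
        # and the Tomato router must pass protocol 41 (DMZ usually does).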

    Read the article

  • Random haproxy 408 errors

    - by Errol Fitzgerald
    I keep getting random 408 timeout messages from haproxy, but can't figure out if it's a bug in 1.5-dev25 or something in my config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 info
            #log loghost local0 info
            maxconn 80000
            #debug
            #quiet
            user haproxy
            group haproxy
            stats socket /tmp/haproxy.sock

        defaults
            log global
            mode http
            errorfile 503 /etc/haproxy/errors/503.http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 80000
            timeout client 12s
            timeout server 120s
            timeout queue 120s
            timeout connect 10s
            timeout http-request 30s
            option http-server-close
            timeout http-keep-alive 3000
            option abortonclose
            option httpchk
            ...

    Does anyone see anything wrong with these config settings?
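
    One plausible culprit worth checking (an editor's guess, not a confirmed diagnosis): browsers open speculative "preconnect" TCP connections and often never send a request on them; when timeout http-request (30s here) expires, haproxy logs a 408 for each. Later 1.5-dev snapshots added an option aimed at exactly this pattern; if your build has it, adding it to defaults may silence the noise:

        defaults
            # Hedged: "option http-ignore-probes" only exists in newer
            # 1.5-dev builds; check "haproxy -vv" and the docs for yours.
            option http-ignore-probes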

    Read the article

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client with a hosting arrangement of 400 customer sites, all hosted through SuPHP in CGI mode on Apache. The sysop is now gone and the client is calling on me to roll out a new PHP thing. Trouble is, server load is very high right now, and we have found that it's due to the crawlers. One customer in particular complained of slow websites; we engaged a 304-header plugin on his site against most crawlers, and his site perked right up. We'd like to lower the overall load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I could roll out 400 .htaccess file changes, but it would be ideal to make the change in one central Apache config and have it affect all the sites at once.
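
    A hedged sketch of the central-config route (an editor's illustration, not a tested recipe: the user-agent list below is a stand-in for the real one, and whether [R=304] behaves cleanly should be verified on this Apache version before touching 400 sites):

        # In httpd.conf or a globally included file, so it covers every site.
        <IfModule mod_rewrite.c>
            RewriteEngine On
            # Hypothetical short list; substitute the real crawler keywords.
            RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|baiduspider) [NC]
            RewriteRule .* - [R=304,L]
        </IfModule>
        # Note: virtual hosts do not inherit mod_rewrite rules by default;
        # depending on the Apache version, RewriteOptions Inherit (or a
        # one-line include per vhost) may be needed.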

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (established connections) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds.

    The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, one for each IP that www.google.com resolves to. However, I have the sense - without being 100% sure, since I haven't had the chance to test it - that the limit of 17500/86400 applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track global:

        source-track
            This option enables the tracking of number of states created per
            source IP address. This option has two formats:
            + source-track rule - The maximum number of states created by
              this rule is limited by the rule's max-src-nodes and
              max-src-states options. Only state entries created by this
              particular rule count toward the rule's limits.
            + source-track global - The number of states created by all
              rules that use this option is limited. Each rule can specify
              different max-src-nodes and max-src-states options, however
              state entries created by any participating rule count towards
              each individual rule's limits. The total number of source IP
              addresses tracked globally can be controlled via the src-nodes
              runtime option.

    I tried to apply source-track global to the above rule without success. How can I use this option to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks
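
    One direction to experiment with (an editor's sketch, untested; details may need adjusting for pf on FreeBSD 9): putting the name in a table keeps the five resolved addresses behind a single rule object, and source-track global makes all rules that carry the option share their source tracking:

        # Hostnames in a table are resolved when the ruleset is loaded.
        table <google> { www.google.com }
        pass out on $net proto tcp from any to <google> port www \
            flags S/SA keep state (source-track global, max-src-conn 200, \
            max-src-conn-rate 17500/86400)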

    Read the article

  • Macos default paths prepend my defined paths in vim

    - by Bogdan Gusiev
    I am trying to call a shell command from vim, e.g. :!ls. Unfortunately, some default paths get prepended to the PATH defined in my original shell. Here is the echo $PATH output in the original shell:

        /usr/local/heroku/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/bogdan/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/bogdan/.rvm/bin:/Users/bogdan/bin:/Users/bogdan/.rvm/bin:/usr/local/Cellar/git/1.7.12.2/libexec/git-core:/Users/bogdan/.rvm/bin

    and in the shell called from within vim:

        /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@devauc/bin:/Users/bogdan/.rvm/gems/ruby-1.9.3-p194@global/bin:/Users/bogdan/.rvm/rubies/ruby-1.9.3-p194/bin:/Users/bogdan/.rvm/bin:/Users/bogdan/bin:/usr/local/Cellar/git/1.7.12.2/libexec/git-core:/Users/bogdan/.rvm/bin

    Why do they appear there? How can I prevent that and make vim's shell use the original PATH variable?
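
    For what it's worth (an editor's workaround sketch, not from the original question): on Mac OS X, /usr/libexec/path_helper runs from /etc/profile and /etc/zprofile and puts the system directories at the front of PATH whenever a login shell starts, which is the usual source of this reordering. One blunt fix is to rebuild $PATH inside vim from a full login shell:

        " In .vimrc; assumes bash is the login shell. The subshell's
        " trailing newline is stripped before the assignment.
        let $PATH = substitute(system("bash -lc 'echo $PATH'"), '\n\+$', '', '')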

    Read the article
