Search Results

Search found 190 results on 8 pages for 'typo'.

Page 6/8 | < Previous Page | 2 3 4 5 6 7 8  | Next Page >

  • Easy Made Easier - Networking

    - by dragonfly
        In my last post, I highlighted the feature of the Appliance Manager Configurator to auto-fill some fields based on previous field values, including host names based on the System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data-entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.

        The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.

        The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. These problems come to light during Step 11, when network validation occurs. That is almost halfway through the 24 total steps, and it can be frustrating, whether the cause was a typo, a DNS mis-configuration or an IP address already in use. This is why I recommend, as a best practice, taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.

        Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check that the host names you entered are in DNS and do resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).

        After you have completed the validation, as seen in the screen shot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance, or, if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.

        I hope you are finding this series of posts useful. My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a directly connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.
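
        If you want a quick sanity check of the same kind the Network Validation step performs, a rough shell equivalent from any laptop on the target network looks like this (a minimal sketch; the host names and addresses are placeholders, not values from the configurator):

            #!/bin/sh
            # Hypothetical example: verify forward DNS, then check for IP conflicts,
            # roughly what the configurator's Network Validation step does.
            for host in oda-node1 oda-node2; do
                nslookup "$host" || echo "WARN: $host not found in DNS"
            done
            for ip in 192.0.2.10 192.0.2.11; do
                # An answer here means some machine is ALREADY using the address.
                ping -c 2 "$ip" && echo "WARN: $ip is already in use"
            done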

    Read the article

  • Undefined control sequence

    - by Jelle Fresen
    Hi, I am writing my Master's thesis in LaTeX, but I can't get the provided style to work. Specifically, I get the error 'Undefined control sequence' when using the command makeformaltitlepages, which is defined in mscthesis.sty. On the internet, the only answers I could find are the straightforward 'you probably made a typo' or 'you probably forgot to include the package', but I have reason to believe neither of those applies to me. I am quite sure that the command exists, for when I add a little verification using the @ifundefined command, the log file shows that the command actually does exist. And, as can be seen in the following piece of code, I also include the package:

        \usepackage{mscthesis}
        % setup information like author, company, title, etc.
        \begin{document}
        \formatmatter
        \thispagestyle{empty}
        \maketitle
        \makeatletter
        \@ifundefined{makeformaltitlepages}{\message{Function is not defined.}}{\message{Function is defined.}}
        \makeatother
        \makeformaltitlepages{\input{abstract}}
        % add chapters, sections, etc. and end the document

    Now, the output shows the line "Function is defined." just before the output of \maketitle (which I think is rather strange on its own, but that might be a flushing issue), followed by the following infinitely repeated error (well, cut off after 100 times by LaTeX):

        Function is defined.
        // some gibberish about font info
        ! Undefined control sequence.
        \GenericError ...
        #4 \errhelp \@err@ ...
        l.112 \makeformaltitlepages{}
        The control sequence at the end of the top line of your error message was
        never \def'ed. If you have misspelled it (e.g., `\hobx'), type `I' and the
        correct spelling (e.g., `I\hbox'). Otherwise just continue, and I'll forget
        about whatever was undefined.

    While the error keeps repeating, the line that starts with '#4' cycles between the following four lines:

        #4 \errhelp \@err@ ...
        \let \@err@ ...
        \@empty \def \MessageBreak...
        \endgroup

    Ok, so, do any of you have a suggestion of how I might continue to hunt this bug? Or what blatantly obvious mistake did I make?
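
    Since the command itself demonstrably exists, the undefined name is probably a macro used inside its expansion (a missing package that the style depends on, for instance; an assumption, as mscthesis.sty isn't shown here). Two stock TeX debugging knobs localize that kind of failure; a sketch, to be placed just before the failing call:

        \errorcontextlines=10000   % show the full macro context when the error fires,
                                   % revealing WHICH inner control sequence is undefined
        \show\makeformaltitlepages % dump the macro's definition to the log/terminal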

    Read the article

  • How to setup prawn on heroku when installed as a git submodule

    - by brad
    I have a Rails app that I am trying to deploy to Heroku. This app generates PDFs using prawn. I installed prawn as a git submodule rather than as a gem, as this is what is recommended on the prawn website (here). This has not worked well with Heroku so far, though. As stated on Heroku's application constraints page, submodules are not supported, so I followed their instructions to track the submodule in the main project and tried again. This has not worked, and when I access my application I get the following error:

        App failed to start
        An error happened during the initialization of your app. This may be due to a
        typo, wrong number of arguments, or calling a function that doesn't exists.
        Check the stack trace below for specific details. Make sure the app is working
        locally in production mode, by running it with RAILS_ENV (for Rails apps) or
        RACK_ENV (for Sinatra or other rack apps) set to production.
        e.g. RAILS_ENV=production script/server.

        Original Error
        /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require':
        /disk1/home/slugs/208590_03c9c22_67f5/mnt/app/controllers/invoices_controller.rb:37:
        syntax error, unexpected ')' (SyntaxError)
        ).to_pdf(@invoice)

    (and then a whole lot more that I'll spare you from) The .to_pdf function described in the last line is called in a controller in exactly the way described in the prawn how-to that I linked to above, so my interpretation of the error message is that prawn is not being installed/detected. Does anyone know how I can address this? I'm new to Heroku so have little idea how to approach this. Is the submodule approach for prawn dead in the water from the get-go? Do I need to install it as a gem instead? I'd rather keep it as a submodule just because that works for now and I don't want to break it.
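
    If the submodule route stays flaky, the gem route on Heroku of that era was a .gems manifest at the app root plus a config.gem line for Rails 2.x. A sketch only; the version number is an assumption, use whatever works locally:

        # .gems (a file at the app root; Heroku's bamboo-era gem manifest)
        prawn --version 0.8.4

        # config/environment.rb (Rails 2.x)
        Rails::Initializer.run do |config|
          config.gem "prawn", :version => "0.8.4"
        end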

    Read the article

  • How reliable is Verify() in Moq?

    - by matthewayinde
    I'm new to unit testing and ASP.NET MVC. I've been trying to get my head into both using Steve Sanderson's "Pro ASP.NET MVC Framework". In the book there is this piece of code:

        public class AdminController : Controller
        {
            ...
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Edit(Product product, HttpPostedFileBase image)
            {
                ...
                productsRepository.SaveProduct(product);
                TempData["message"] = product.Name + " has been saved.";
                return RedirectToAction("Index");
            }
        }

    That he tests like so:

        [Test]
        public void Edit_Action_Saves_Product_To_Repository_And_Redirects_To_Index()
        {
            // Arrange
            AdminController controller = new AdminController(mockRepos.Object);
            Product newProduct = new Product();

            // Act
            var result = (RedirectToRouteResult)controller.Edit(newProduct, null);

            // Assert: Saved product to repository and redirected
            mockRepos.Verify(x => x.SaveProduct(newProduct));
            Assert.AreEqual("Index", result.RouteValues["action"]);
        }

    THE TEST PASSES. So I intentionally corrupt the code by adding "productsRepository.DeleteProduct(product);" after the "SaveProduct(product);" as in:

        ...
        productsRepository.SaveProduct(product);
        productsRepository.DeleteProduct(product);
        ...

    THE TEST PASSES. (i.e. it condones a calamitous [hypnosis + intellisense]-induced typo :) ) Could this test be written better? Or is there something I should know? Thanks a lot.
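
    For what it's worth, Verify() is reliable at what it claims: it asserts only that SaveProduct was called, and says nothing about other calls. A sketch of two standard Moq ways to make the test fail on the stray DeleteProduct (assuming a repository interface along the lines of the book's IProductsRepository):

        // Option 1: a strict mock fails on ANY call that was not explicitly set up.
        var mockRepos = new Mock<IProductsRepository>(MockBehavior.Strict);
        mockRepos.Setup(x => x.SaveProduct(It.IsAny<Product>()));

        // Option 2: verify the call you expect, then pin down the ones you forbid.
        mockRepos.Verify(x => x.SaveProduct(newProduct), Times.Once());
        mockRepos.Verify(x => x.DeleteProduct(It.IsAny<Product>()), Times.Never());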

    Read the article

  • Why does Hibernate ignore the JPA2 standardized properties in my persistence.xml?

    - by Ophidian
    I have an extremely simple web application running in Tomcat using Spring 3.0.1, Hibernate 3.5.1, JPA 2, and Derby. I am defining all of my database connectivity in persistence.xml and merely using Spring for dependency injection. I am using embedded Derby as my database. Everything works correctly when I define the driver and url properties in persistence.xml in the classic Hibernate manner, as thus:

        <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="hibernate.connection.url" value="jdbc:derby:webdb;create=true"/>

    The problems occur when I switch my configuration to the JPA2 standardized properties, as thus:

        <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:derby:webdb;create=true"/>

    When using the JPA2 property keys, the application bails hard with the following exception:

        java.lang.UnsupportedOperationException: The user must supply a JDBC connection

    Does anyone know why this is failing? NOTE: I have copied the javax... property strings straight from the Hibernate reference documentation, so a typo is extremely unlikely.

    Read the article

  • Listen to double click not click

    - by Mohsen
    I'm just wondering why the click event happens when I double-click an element. I have this code: (JSBIN)

    HTML

        <p id="hello">Hello World</p>

    JavaScript

        document.getElementById('hello').addEventListener('click', function(e){
          e.preventDefault();
          this.style.background = 'red';
        }, false);

        document.getElementById('hello').addEventListener('dbclick', function(){
          this.style.background = 'yellow';
        }, false);

    It should do different things for click and double click, but it seems that when you double click on the p, it catches the click event in advance and ignores the double click. I tried preventDefault on the click event too. How can I listen to just dbclick?

    UPDATE I had a typo in my code. dbclick is wrong. It's dblclick. Anyway, the problem still exists. When the user double clicks, the click event happens. This is updated code that proves it: (JSBin)

        document.getElementById('hello').addEventListener('click', function(e){
          e.preventDefault();
          this.style.background = 'red';
          this.innerText = "Hello World clicked";
        }, false);

        document.getElementById('hello').addEventListener('dblclick', function(){
          this.style.background = 'green';
        }, false);
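
    A double click is, by definition, two clicks, so the browser always fires click (twice) before dblclick. The usual workaround is to delay the single-click action briefly and cancel it if a second click arrives; a sketch, where the 250 ms window is an arbitrary choice:

        var el = document.getElementById('hello');
        var pending = null;

        el.addEventListener('click', function () {
          clearTimeout(pending);
          // Defer the single-click action so a double click can cancel it.
          pending = setTimeout(function () {
            el.style.background = 'red';
          }, 250);
        }, false);

        el.addEventListener('dblclick', function () {
          clearTimeout(pending);   // suppress the deferred single-click action
          el.style.background = 'green';
        }, false);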

    Read the article

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable. Here are the rules for this exercise (a Python sketch of one tail-first approach follows the sample data):

    1 - no whining about how this should have been separate fields in the first place; we are often confronted with less than ideal situations and have to make the best of them
    2 - for this post, use any language you want
    3 - feel free to play code golf
    4 - Assume an address in the US (for now)
    5 - assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B)
    6 - states may be abbreviated
    7 - zip code could be standard 5 digit or zip+4
    8 - there are typos in some instances

    UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode; and "errors" means typo (corrected above).

    Sample Data:

        A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
        11522 Shawnee Road, Greenwood DE 19950
        144 Kings Highway, S.W. Dover, DE 19901
        Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
        Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
        Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
        2284 Bryn Zion Road, Smyrna, DE 19904
        VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
        580 North Dupont Highway Dover, DE 19901
        P.O. Box 778 Dover, DE 19903
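
    Most approaches to this kind of problem work right to left, since the tail (city, state, zip) is the most regular part. A minimal Python sketch under that assumption; it punts on addressee/suite splitting and will misfire on typo'd records like the truncated "MD 21" line, which should go to a manual-review pile:

        import re

        # street ... city ... 2-letter state ... zip or zip+4 at the end
        TAIL = re.compile(r'^(?P<street>.+?)[,\s]+'
                          r'(?P<city>[A-Za-z. ]+?)[,\s]+'
                          r'(?P<state>[A-Z]{2})\s+'
                          r'(?P<zip>\d{5}(-\d{4})?)$')

        def parse(addr):
            m = TAIL.match(addr.strip())
            return m.groupdict() if m else None  # None -> send to manual review

        print(parse("11522 Shawnee Road, Greenwood DE 19950"))
        # {'street': '11522 Shawnee Road', 'city': 'Greenwood',
        #  'state': 'DE', 'zip': '19950'}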

    Read the article

  • NHibernate listener/event to replace object before insert/update

    - by vIceBerg
    Hi! I have a Company class which has a collection of Address. Here's my Address class (written off the top of my head):

        public class Address
        {
            public string Civic;
            public string Street;
            public City City;
        }

    This is the City class:

        public class City
        {
            public int Id;
            public string Name;
            public string SearchableName
            {
                get
                {
                    //code
                }
            }
        }

    Address is persisted in its own table and has a reference to the city's ID. City is also persisted in its own table. The City's SearchableName is used to prevent some mistakes in the city name the user types. For example, if the city name is "Montréal", the searchable name will be "montreal". If the city name is "St. John", the searchable name will be "stjohn", etc. It's used mainly for searching and to prevent having multiple cities with a typo in them. When I save an address, I want a listener/event to check if the city is already in the database. If so, cancel the insert and replace the user's city with the database one. I would like the same behavior with updates. I tried this:

        public bool OnPreInsert(PreInsertEvent @event)
        {
            City entity = (@event.Entity as City);
            if (entity != null)
            {
                if (entity.Id == 0)
                {
                    var city = (from c in @event.Session.Linq<City>()
                                where c.SearchableName == entity.SearchableName
                                select c).SingleOrDefault();
                    if (city != null)
                    {
                        //don't know what to do here
                        return true;
                    }
                }
            }
            return false;
        }

    But if there's already a City in the database, I don't know what to do. @event.Entity is read-only; if I set @event.Entity.Id, I get a "null identifier" exception. I tried to trap insert/update on Address and on Company, but the City is the first one to get inserted (it's logical...). Any thoughts? Thanks

    Read the article

  • PHP OOP singleton doesn't return object

    - by Misiur
    Weird trouble. I've used singletons multiple times, but this particular case just doesn't want to work. Dump says that the instance is null.

        define('ROOT', "/");
        define('INC', 'includes/');
        define('CLS', 'classes/');

        require_once(CLS.'Core/Core.class.php');
        $core = Core::getInstance();
        var_dump($core->instance);
        $core->settings(INC.'config.php');
        $core->go();

    Core class

        class Core
        {
            static $instance;
            public $db;
            public $created = false;

            private function __construct()
            {
                $this->created = true;
            }

            static function getInstance()
            {
                if(!self::$instance) {
                    self::$instance = new Core();
                } else {
                    return self::$instance;
                }
            }

            public function settings($path = null) { ... }
            public function go() { ... }
        }

    Error code

        Fatal error: Call to a member function settings() on a non-object in path

    It's possibly some stupid typo, but I don't have any errors in my editor. Thanks for the fast responses as always.
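
    The code as posted never returns the newly created object: when the instance doesn't exist yet, getInstance() creates it and then falls out of the method without a return, so the very first caller gets null. A sketch of the usual shape:

        static function getInstance()
        {
            if (!self::$instance) {
                self::$instance = new Core();
            }
            return self::$instance;  // return unconditionally, not only in the else branch
        }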

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot:

        exec sp_addpublication_snapshot
              @publication = N'MyPub'
            , @frequency_type = 4
            , @frequency_interval = 1
            , @frequency_relative_interval = 1
            , @frequency_recurrence_factor = 0
            , @frequency_subday = 1
            , @frequency_subday_interval = 5
            , @active_start_date = 0
            , @active_end_date = 0
            , @active_start_time_of_day = 500
            , @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

    Read the article

  • Constructor initializer list: code from the C++ Primer, chapter 16

    - by Alexandros Gezerlis
    Toward the end of Chapter 16 of the "C++ Primer" I encountered the following code (I've removed a bunch of lines):

        class Sales_item {
        public:
            // default constructor: unbound handle
            Sales_item(): h() { }
        private:
            Handle<Item_base> h;   // use-counted handle
        };

    My problem is with the Sales_item(): h() { } line. For the sake of completeness, let me also quote the parts of the Handle class template that I think are relevant to my question (I think I don't need to show the Item_base class):

        template <class T> class Handle {
        public:
            // unbound handle
            Handle(T *p = 0): ptr(p), use(new size_t(1)) { }
        private:
            T* ptr;       // shared object
            size_t *use;  // count of how many Handles point to *ptr
        };

    I would have expected something like either: a) Sales_item(): h(0) { }, which is a convention the authors have used repeatedly in earlier chapters, or b) Handle<Item_base>(), if the intention was to invoke the default constructor of the Handle class. Instead, what the book has is Sales_item(): h() { }. My gut reaction is that this is a typo, since h() looks suspiciously similar to a function declaration. On the other hand, I just tried compiling under g++ and running the example code that uses this class and it seems to be working correctly. Any thoughts?
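
    For the record: in a constructor initializer list, h() is not a function declaration; it value-initializes the member h, which for a class type means calling its default constructor. Since Handle's lone constructor defaults its argument to 0, all the spellings above end up in the same place. A sketch, reusing the question's Handle template:

        struct S {
            S() : h() { }      // value-initializes h: calls Handle<int>'s default
                               // constructor, i.e. Handle(T* p = 0) with p == 0
            S(int) : h(0) { }  // explicit argument: same constructor, same result
            Handle<int> h;
        };
        // The "most vexing parse" only bites at declaration scope, e.g.
        //   Handle<int> h();   // declares a function, not an object
        // Inside an initializer list, h() can only mean member initialization.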

    Read the article

  • Scope of the c++ using directive

    - by ThomasMcLeod
    From section 7.3.4.2 of the C++11 standard:

        A using-directive specifies that the names in the nominated namespace can be
        used in the scope in which the using-directive appears after the
        using-directive. During unqualified name lookup (3.4.1), the names appear as
        if they were declared in the nearest enclosing namespace which contains both
        the using-directive and the nominated namespace. [ Note: In this context,
        "contains" means "contains directly or indirectly". —end note ]

    What do the second and third sentences mean exactly? Please give an example. Here is the code I am attempting to understand:

        namespace A { int i = 7; }

        namespace B {
            using namespace A;
            int i = i + 11;
        }

        int main(int argc, char * argv[]) {
            std::cout << A::i << " " << B::i << std::endl;
            return 0;
        }

    It prints "7 7" and not "7 18" as I would expect. Sorry for the typo; the program actually prints "7 11".
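
    A sketch of my reading of how those two sentences play out in this exact example, with the standard's "as if" rule spelled out in comments:

        namespace A { int i = 7; }

        namespace B {
            using namespace A;  // for unqualified lookup, A's names now behave as if
                                // declared in the GLOBAL namespace: the nearest
                                // namespace enclosing both this directive and A
            int i = i + 11;     // unqualified 'i' finds B::i first (B is searched
                                // before the enclosing global scope), so the
                                // initializer reads B::i itself: zero-initialized
                                // static storage, hence 0 + 11 == 11
        }
        // A::i == 7 and B::i == 11, matching the corrected output "7 11".
        // A::i would only be found from inside B if B declared no 'i' of its own.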

    Read the article

  • C++ arrays as parameters, EDIT: now includes variable scoping

    - by awshepard
    Alright, I'm guessing this is an easy question, so I'll take the knocks, but I'm not finding what I need on Google or SO. I'd like to create an array in one place and populate it inside a different function. I define a function:

        void someFunction(double results[])
        {
            for (int i = 0; i < 100; ++i)
            {
                for (int n = 0; n < 16; ++n)  // note this iteration limit
                {
                    results[n] += i * n;
                }
            }
        }

    That's an approximation of what my code is doing, but regardless, it shouldn't be running into any overflow or out-of-bounds issues or anything. I generate an array:

        double result[16];
        for (int i = 0; i < 16; i++)
        {
            result[i] = -1;
        }

    then I want to pass it to someFunction:

        someFunction(result);

    When I set breakpoints and step through the code, upon entering someFunction, results is set to the same address as result, and the value there is -1.000000 as expected. However, when I start iterating through the loop, results[n] doesn't seem to resolve to *(results+n) or *(results+n*sizeof(double)); it just seems to resolve to *(results). What I end up with is that instead of populating my result array, I just get one value. What am I doing wrong?

    EDIT Oh fun, I have a typo: it wasn't void someFunction(double results[]). It was: void someFunction(double result[])... So perhaps this is turning into a scoping question. If my double result[16] array is defined in a main.cpp, and someFunction is defined in a Utils.h file that's included by the main.cpp, does the result variable in someFunction then wreak havoc on the result array in main?
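
    Two mechanics worth keeping straight here, independent of what the debugger was displaying (a sketch): an array parameter is really a pointer, and a parameter name only shadows, never clobbers, a same-named variable in the caller:

        #include <cstdio>

        void someFunction(double result[])   // exactly equivalent to: double* result
        {
            // 'result' here is a local pointer; it SHADOWS main's array name and
            // can only affect main's array through writes via the pointer.
            result[3] = 42.0;                // *(result + 3): the compiler scales
                                             // the offset by sizeof(double) for you
        }

        int main()
        {
            double result[16] = { 0 };
            someFunction(result);            // the array decays to &result[0]
            std::printf("%f\n", result[3]);  // prints 42.000000
        }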

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at a first glance. I know about the differences in L-values and R-values. Still, recently I tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e.

        int foo[2][3];
        int (*a)[3] = foo;

    However, I just can't find out how the compiler "understands" the type definition of a, in spite of the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler:

        int foo[2][3];
        typedef int my_t[3];
        my_t *a = foo;

    At the bottom line, can someone answer me the question as to how the term int (*a)[3] is read by the compiler? int a[3] is simple, and int *a[3] is simple as well. But then, why is it not int *(a[3])? EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
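
    The standard way to read these is inside-out from the identifier, with [] binding tighter than * unless parentheses say otherwise; a sketch:

        int foo[2][3];

        int *b[3];         // [] binds first: b is an array of 3 pointers to int;
                           // identical to int *(b[3]), so those parentheses add nothing
        int (*a)[3] = foo; // the parentheses force * first: a is a pointer to an
                           // array of 3 int, and foo decays to exactly that, a
                           // pointer to its first ROW

        // a + 1 advances by sizeof(int[3]), one whole row;
        // (*a)[2] and a[0][2] both name foo[0][2]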

    Read the article

  • IntelliJ inspection -- non-thrown exception

    - by skiaddict1
    This is a follow-up question to 1832203. I'm making it a new question as well, because it seems that posting an answer to a question doesn't change its position on the java page, so I'm worried it won't get seen. Apologies if I've just stepped on some etiquette toes. I'm an IntelliJ newbie -- started using it two days ago and I'm absolutely head-over-heels in love! One of the things I adore is the code inspections. However... In one of my classes I often create exceptions without throwing them. If I can't turn off (or downgrade) the inspection warning for this, then I can see I'm going to end up ignoring inspections on at least that file (if not the entire project), which would be a real pity. I've done a search in the inspection settings for "exception" and found nothing that relates exactly, so I turned them all off just to see, and it's still doing it (even after a rebuild... BTW, when are inspections redone? At save? At rebuild?). So I would really like some help on how to make this one into an info/typo level -- which I can then ignore. Using the free version, if that makes any difference. TIA to all those experienced IntelliJ warriors out there!

    Read the article

  • Google Doclist API : bug in revision history?

    - by Jerbome
    I'm trying to manage revisions of files (not documents) stored in Google Docs programmatically, using the gdata document list Java library v3. I can create files and revisions using this tool: I can see them in the web UI. The thing is, the content of my revisions seems wrong. Here is my test protocol:

    I create a plain text file with "Hello World" in it. I upload it to gdocs without converting it. I create a revision of this file, its content changing to "Content of the second version". I create another revision; its content is now "Content of the third version".

    At each step, I check the content of each revision, using my app AND using the web UI. Here is what I get. First step: no problem, I see one version containing the "Hello World" text. Second step: no problem either, I see 2 versions, containing Hello World for the first one and Content of the second version for the second. Third step: here the problem comes. I see my 3 versions, but only the third and last seems to be correct. When I download the second version, the content is "Content of the second versio" (not a typo, it misses the 'n'). And I cannot even download the initial version; it seems to time out.

    Important thing: I did not have this problem three weeks ago; my revision management worked well. I have no idea what happens there, except that it seems to be server-related, as the problem is seen either with my app or with Google's native web app. Last thing: I tried using the Google Drive API, as gdocs had been merged with Drive. When I request revisions of my file, the API returns an error saying that revisions are not supported for files, even though I can see them in the UI. I tried on converted documents; it worked. I'm looking for a workaround for this problem. Has anyone ever encountered such a problem? Thanks in advance, Jérôme

    Read the article

  • Problem with overlapping divs - CSS

    - by Chaitanya
        html, body { margin: 0px; padding: 0px; }
        #pageContainer { margin: auto; padding: auto; }
        #contentContainer { margin: 150px; width: 1100px; height: 100%; overflow: hidden; }
        #leftContainer { width: 80%; min-height: 800px; background: #009900; float: left; }
        #left1 { margin: 80px 0 0 80px; height: 550px; top: 0px; z-index: 1; background: #000000; }
        #left2 { margin: 80px 0 0 80px; height: 500px; top: 110px; z-index: 2000; background: #99FFFF; }
        #rightContainer { width: 20%; height: inherit; background: black; float: right; }
        /* CSS Document */

    I require 2 overlapping divs, which look like the one below.

    [ASCII sketch from the original post: two boxes, the second overlapping the first and extending below it]

        <div id="pageContainer">
          <div id="contentContainer">
            <div id="leftContainer">
              Am the left container
              <div id="left1">
                <div id="left2">
                </div>
              </div>
            </div>
            <div id="rightContainer">
            </div>
          </div>
        </div>

    The problem is I am failing to get the overlap. Where am I going wrong? Edit 1: topx was a typo, corrected it.
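
    One thing to check (an assumption, since the desired layout is only sketched): top and z-index have no effect on elements whose position is static, the default. A minimal sketch of the overlap with positioning added:

        #left1 {
          position: relative;   /* establishes the positioning context */
          height: 550px; margin: 80px 0 0 80px;
          background: #000000;
        }
        #left2 {
          position: absolute;   /* now top and z-index actually apply */
          top: 110px; left: 80px;
          height: 500px; z-index: 2;
          background: #99FFFF;  /* overlaps #left1 and hangs below it */
        }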

    Read the article

  • Windows Phone 7.1.1 Ping

    - by Matt
    I am writing an app to connect to a third-party application using REST web services. I have a configuration page that asks for an IP, Port, User Name & Password; currently it just blindly assumes you enter the correct details and attempts a connection. I want to create a test routine that goes through and checks off the following steps when setting up the config information:

    1. Is the IP/Hostname correct (using ping or something)
    2. Is the Port correct
    3. Is the Username & Password correct

    It then displays the results on screen as it goes, so that if it can't connect to the service it's easier to identify where the issue is. To achieve step 1 I would like to use ping, or some equivalent that does not rely on a particular port being open, so I can eliminate dodgy DNS or a typo in the IP/Hostname. I understand from previous questions asked that ping wasn't possible in 7.0, but with Mango the sockets classes have been added in. Is it possible now? If so, how? If it still isn't possible, is there a different way I can achieve step 1?

    Read the article

  • Ways to prevent multiple email notification with php&mysql

    - by Louis Loudog Trottier
    My 1st post, but I got many great answers and tips from stackoverflow so far. This one was a close call - How does facebook, gmail send the real time notification? - but not exactly, so let's brainstorm this together. I have a CMS system with mail notification when a change is made on the site. Everything works very well, but I want to prevent multiple notifications if someone makes another quick change to, let's say, fix a typo. Using php mail(), obviously. I've thought of 2 ways: one simple, and one... let's just say, pretty heavy... cough. The 3rd one was inspired by Implementing Email Notification, but sending a bunch of emails at once really doesn't look appealing to me.

    1. Use a timestamp to check if another change was made in the, let's say, last 5 minutes (see the sketch below).
    2. Record the last change and compare it to the new one. Could be useful for backup at the same time, since I'll have to save the change somewhere, but the text can be long and an SQL search would be painful. Wouldn't it?
    3. Use cron to send changes every x minutes... convince me if you think it is a suitable solution.

    Any ideas, comments or suggestions of your own? Looking forward to your inputs, and since I now registered, I'll do my best to help around. Cheers all, llt
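
    A minimal sketch of the timestamp approach (option 1). The table, column, and variable names are placeholders, not an existing schema:

        <?php
        // Hypothetical schema: notifications(site_id INT, last_sent DATETIME)
        $pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');

        $stmt = $pdo->prepare('SELECT last_sent FROM notifications WHERE site_id = ?');
        $stmt->execute(array($siteId));
        $lastSent = $stmt->fetchColumn();

        // Only notify if the previous mail is older than 5 minutes.
        if (!$lastSent || strtotime($lastSent) < time() - 300) {
            mail($to, 'Site changed', $body);   // $to and $body set elsewhere
            $pdo->prepare('UPDATE notifications SET last_sent = NOW()
                           WHERE site_id = ?')->execute(array($siteId));
        }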

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: We have a need for a rock-solid reliable file-mover process. We have source directories that are often being written to that we need to move files from. The files come in pairs - a big binary, and a small XML index. We get a CTL file that defines these file bundles. There is a process that operates on the files once they are in the destination directory; it gets rid of them when it's done. Would rsync do the best job, or do we need to get more complex?

    Long story as follows: We have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, that we cannot presently run our own scripts on. For security reasons, we can't run the copy jobs on our AIX servers - they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines, and then deletes the files from the source. However, we're finding any number of bugs or poorly-handled error checking. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are: With a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync will be very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?

    Additional info: We will be concerned about the ingestion process that operates on the files once they are in the destination directory. We don't want it operating on files while we are in the process of copying them; it waits until the small XML index file is present. Our current copy jobs are supposed to copy the XML file last. Sometimes the network has problems; sometimes the SFTP source servers crap out on us. Sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file due to this sort of error. We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
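
    If rsync turns out to fit, the "index file last" requirement maps naturally onto a two-pass run. A sketch with placeholder paths, assuming the bundles land in one flat directory; one caveat that matters here: rsync needs rsync (or at least a shell) on the remote end, so it won't speak to SFTP-only servers:

        #!/bin/sh
        SRC='user@source:/data/outbound/'   # placeholder
        DST='/ingest/incoming/'             # placeholder

        # Pass 1: the big binaries first; the ingester ignores them until
        # the XML index shows up, so partial state is harmless.
        rsync -av --partial --exclude='*.xml' "$SRC" "$DST"

        # Pass 2: the XML indexes last, so the ingester only ever sees
        # complete pairs. --remove-source-files deletes each source file
        # only after its transfer is verified.
        rsync -av --partial --include='*.xml' --exclude='*' \
              --remove-source-files "$SRC" "$DST"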

    Read the article

  • Configuring Novel iPrint client on ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it to use the Follow Me printers at our university (you can take your print from any printer). Using this tutorial from Novell, I tried to convert the rpm package and install it on Ubuntu 13.04 & 13.10. The post-install script from installing the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray, and the old solutions didn't work to whitelist the Novell client). I have the iPrint plugin installed on Firefox as well (I copied the shared object files to the plugin directories). I try installing printers from the provided ipp URL (which lists available printers on the server) with no success. After clicking the printer name I see this:

    I have various errors. Formerly Firefox used to ask for my network username/password when installing the SSL printer, but now it returns this:

        iPrint Printer - The printer is currently not available.

    However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address which I see on working machines, it still prints nothing. I have tried the Novell command line tool, iprntcmd, to print. It is installed at /opt/novell/iprint/bin/:

        msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
        iprntcmd v05.04.00
        Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
        Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

    It adds the printer with an empty location, and again no print. What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that HP is my local printer and has nothing to do with iPrint):

        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:03 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:05 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 676  Group Info: CLIB  Error Code: 0 (0x0)  User ID: 1000
          Error Msg: Success
          Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)

        Thu Oct 31 11:02:06 2013  Trace Info: mydoreq.c, line 1293  Group Info: CUPS-IPP  Error Code: 1282 (0x502)  User ID: 1000
          Error Msg: iPrint Printer - The printer is currently not available.
          Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE

        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:06 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6690  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:08 2013  Trace Info: iprint.c, line 6800  Group Info: IPRINT-lib  Error Code: 4096 (0x1000)  User ID: 1000
          Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
          Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

    I should say that my friend can print using the same instructions on CrunchBang easily, and another guy on 12.04 LTS, but with more struggling. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support.

    PS. I had the same problem with 13.04. No matter if I am within the network or I connect with VPN, I have the same issues.

    Read the article

  • Video games, content strategy, and failure - oh my.

    - by Roger Hart
    Last night was the CS London group's event Content Strategy, Manhattan Style. Yes, it's a terrible title, feeling like a self-conscious grasp for chic, sadly commensurate with the venue. Fortunately, this was not commensurate with the event itself, which was lively, relevant, and engaging. Although mostly if you're a consultant.

    This is a strong strain in current content strategy discourse, and I think we're going to see it remedied quite soon. Not least in Paris on Friday. A lot of the bloggers, speakers, and commentators in the sphere are consultants, or part of agencies and other consulting organisations. A lot of the talk is about how you sell content strategy to your clients. This is completely acceptable. Of course it is. And it's actually useful if that's something you regularly have to do. To an extent, it's even portable to those of us who have to sell content strategy within an organisation. We're still competing for credibility and resource. What we're doing less is living in the beginning of a project.

    This was touched on by Jeffrey MacIntyre (albeit in a your-clients kind of a way), who described "the day two problem". Companies, he suggested, build websites for launch day, and forget about the need for them to be ongoing entities. Consultants, agencies, or even internal folks on short projects will live through Day Two quite often: the trainwreck moment where somebody realises that even if the content is right (which it often isn't), and on time (which it often isn't), it'll be redundant, outdated, or inaccurate by the end of the week/month/fickle social media attention cycle.

    The thing about living through a lot of Day Two is that you see a lot of failure. Nothing succeeds like failure? Failure is good. When it's structured right, it's an awesome tool for learning - that's kind of how video games work. I'm chewing over a whole blog post about this, but basically in game-like learning, you try, fail, go round the loop again. Success eventually yields joy. It's a relatively well-known phenomenon. It works best when that failing step is acutely felt, but extremely inexpensive. Dying in Portal is highly frustrating and surprisingly characterful, but the save-points are well designed and the reload unintrusive. The barrier to re-entry into the loop is very low, as is the cost of your failure out in meatspace. So it's easy (and fun) to learn. Yeah, spot the difference with business failure.

    As an external content strategist, you get to rock up with a big old folder full of other companies' Day Two (and ongoing day two hundred) failures. You can't send the client round the learning loop - although you may well be there because they've been round it once - but you can show other people's round trip. It's not as compelling, but it's not bad.

    What about internal content strategists? We can still point to things that are wrong, and there are some very compelling tools at our disposal - content inventories, user testing, and analytics, for instance. But if we're picking up big, organically sprawling legacy content, Day Two may well be a distant memory, and the felt experience of web content failure is unlikely to be immediate to many people in the organisation. What to do?

    My hunch here is that the first task is to create something immediate and felt, but that it probably needs to be a success. Something quickly doable and visible - a content problem solved with a measurable business result. Now, that's a tall order; but scrape off the "quickly" and it's the whole reason we're here.

    At Red Gate, I've started with the textbook fear-and-passion introduction to content strategy. In fact, I just typo'd that as "contempt strategy", and it isn't a bad description. Yelling "look at this, our website is rubbish!" gets you the initial attention, but it doesn't make you many friends. And if you don't produce something pretty sharp-ish, it's easy to lose the momentum you built up for change.

    The first thing I've done - after the visual content inventory - is to delete a bunch of stuff. About 70% of the SQL Compare web content has gone, in fact. This is a really, really cheap operation. It's visible, and it's powerful. It's cheap because you don't have to create any new content. It's not free, however, because you do have to validate your deletions. This means analytics, actually reading that content, and talking to people whose business purposes that content has to serve. If nobody outside the company uses it, and nobody inside the company thinks they ought to, that's a no-brainer for the delete list.

    The payoff here is twofold. There's the nebulous, hard-to-illustrate "bad content does user experience and brand damage" argument; and there's the "nobody has to spend time (money) maintaining this now" argument. One or both are easily felt, and the second at least should be measurable.

    But that's just one approach, and I'd be interested to hear from any other internal content strategy folks about how they get buy-in, maintain momentum, and generally get things done.

    Read the article

  • jQuery: how can I clear content without getting the dreaded "stop running this script?" dialog?

    - by Cheeso
    I have a div that holds a div, like this:

        <div id='reportHolder' class='column'>
          <div id='report'>
          </div>
        </div>

    Within the inner div, I add a bunch (7-12) of pairs of a and div elements, like this:

        <h4><a>Heading1</a></h4>
        <div> ...content here....</div>

    The total size of the content is maybe 200k. Each div just contains a fragment of HTML. After I add all the content, I then create an accordion, like this:

        $('#report').accordion({collapsible:true, active:false});

    This all works fine. The problem is, when I try to clear or remove the report div, it takes a looooooong time, and I get 3 or 4 popups asking "Do you want to stop running this script?" I have tried several ways.

    Option 1:

        $('#report').accordion('destroy');
        $('#report').remove();
        $("#reportHolder").html("<div id='report'> </div>");

    Option 2:

        $('#report').accordion('destroy');
        $('#report').html('');
        $("#reportHolder").html("<div id='report'> </div>");

    Option 3:

        $('#report').accordion('destroy');
        $("#reportHolder").html("<div id='report'> </div>");

    No matter what, it hangs for a long while. The call to accordion('destroy') seems not to be the source of the delay. It's the erasure of the HTML content within the report div.

    EDIT - fixed typo.

    ps: this happens on FF3.5 as well as IE8.

    Questions: What is taking so long? How can I clear content without getting the dialog? How can I remove content more quickly?
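
    A guess at the mechanism: jQuery's .html(), .remove(), and .empty() walk every descendant node to unbind events and clear stored data, and that per-node cleanup is what crawls on thousands of elements. Once accordion('destroy') has already detached the widget, replacing the content through the raw DOM skips that bookkeeping; a sketch, safe only if nothing else bound handlers or data to those nodes:

        $('#report').accordion('destroy');

        // Bypass jQuery's per-descendant cleanup: plain innerHTML is one
        // native operation instead of a scripted walk over every node.
        document.getElementById('reportHolder').innerHTML =
            "<div id='report'> </div>";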

    Read the article

  • Simple JQuery Validator addMethod not working

    - by tehaaron
    Updated question at the bottom.

    I am trying to validate a super simple form. Eventually the username will be compared to a RegExp statement, and the same will go for the password. However, right now I am just trying to learn the Validator addMethod format. I currently have this script:

        JQuery.validator.addMethod(
            "legalName",
            function(value, element) {
                if (element.value == "bob") {
                    return false;
                } else return true;
            },
            "Use a valid username."
        );

        $(document).ready(function() {
            $("#form1").validate({
                rules: {
                    username: {
                        legalName: true
                    }
                },
            });
        });

    Which, if I am not mistaken, should return false and respond with "Use a valid username." if I were to put "bob" into the form. However, it is simply submitting it. I am linking to jQuery BEFORE Validator in the header, as instructed. My uber simple form looks like this:

        <form id="form1" method="post" action="">
          <div class="form-row"><span class="label">Username *</span><input type="text" name="username" /></div>
          <div class="form-row"><input class="submit" type="submit" value="Submit"></div>
        </form>

    Finally, how would I go about restructuring the addMethod function to return true and false while keeping the message alert for a false return? (Ignore this last part if you don't understand what I was trying to say :) )

    Thanks in advance. Thanks to everyone who pointed out my JQuery - jQuery typo.

    New: Ideally, I am trying to turn this into a simple login form (username/password). It is for demonstration only, so it won't have a database attached or anything, just some simple js validations. I am looking to make the username validate for <48 characters, only English letters and numbers, no special characters. I thought a whitelist would be easiest, so I had something like this: ^[a-zA-Z0-9]*${1,48} but I am not sure if that is proper JS RegExp (it varies from Ruby RegExp if I am not mistaken?... Usually I use rubular.com). The password will be similar but require some upper/lowercase and numbers. I believe I need to make another $.validator.addMethod for legalPassword that will look very similar.
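
    On the whitelist part: the quantifier has to sit on the character class, before the end anchor; ^[a-zA-Z0-9]*${1,48} won't do what's intended, since {1,48} would need something repeatable in front of it and $ isn't. A sketch of the username rule (the length cap and character set are the asker's; the method name is arbitrary):

        $.validator.addMethod(
            "legalName",
            function (value, element) {
                // letters and digits only, 1 to 48 characters
                return this.optional(element) || /^[a-zA-Z0-9]{1,48}$/.test(value);
            },
            "Use a valid username."
        );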

    Read the article

  • Return pre-UPDATE column values in PostgreSQL without using triggers, functions or other "magic"

    - by Python Larry
    I have a related question, but this is another part of MY puzzle. I would like to get the OLD VALUE of a column from a row that was UPDATEd - WITHOUT using triggers (nor stored procedures, nor any other extra, non-SQL/-query entities). The query I have is like this:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
        RETURNING row_id

    If I could do "FOR UPDATE ON my_table" at the end of the subquery, that'd be divine (and would fix my other question/problem). But that won't work: you can't have this AND a "GROUP BY" (which is necessary for figuring out the COUNT of trans_nbr's). Then I could just take those trans_nbr's and do a query first to get the (soon-to-be-) former processing_by values. I've tried doing it like:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        FROM   my_table old_my_table
        JOIN   (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(trans_nbr) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               ) sub_my_table
          ON   old_my_table.trans_nbr = sub_my_table.trans_nbr
        WHERE  my_table.trans_nbr = sub_my_table.trans_nbr
          AND  my_table.processing_by = old_my_table.processing_by
        RETURNING my_table.row_id, my_table.processing_by, old_my_table.processing_by

    But that can't work; "old_my_table" is not viewable outside of the join; the RETURNING clause is blind to it. I've long since lost count of all the attempts I've made; I have been researching this for literally hours. If I could just find a bullet-proof way to lock the rows in my subquery - and ONLY those rows, and WHEN the subquery happens - all the concurrency issues I'm trying to avoid would disappear...

    UPDATE: [WIPES EGG OFF FACE] Okay, so I had a typo in the non-generic code of the above that I wrote "doesn't work"; it does... thanks to Erwin Brandstetter, below, who stated it would. I re-did it (after a night's sleep, refreshed eyes, and a banana for bfast). Since it took me so long/hard to find this sort of solution, perhaps my embarrassment is worth it? At least this is on SO for posterity now... What I now have (that works) is like this:

        UPDATE my_table
        SET    processing_by = our_id_info  -- unique to this instance
        FROM   my_table AS old_my_table
        WHERE  trans_nbr IN (
                   SELECT trans_nbr
                   FROM   my_table
                   GROUP  BY trans_nbr
                   HAVING COUNT(*) > 1
                   LIMIT  our_limit_to_have_single_process_grab
               )
          AND  my_table.row_id = old_my_table.row_id
        RETURNING my_table.row_id, my_table.processing_by,
                  old_my_table.processing_by AS old_processing_by

    The COUNT(*) is per a suggestion from Flimzy in a comment on my other (linked above) question. (I was more specific than necessary. [In this instance.])

    Read the article
