Search Results

Search found 8725 results on 349 pages for 'beginning steps'.

  • Adding the text area input to a list with jquery

    - by amylynn83
    I am making a shopping list app. When an item is added to the text input, it should be converted into a list item with a class of "tile" and added to the beginning of the list with a class of ".shopping_item_list". Here is my HTML: <section> <h1 class="outline">Shopping List Items</h1> <ul class="shopping_item_list"> <li class="tile">Flowers<span class="delete">Delete</span></li> </ul> </section> My jQuery: $("input[name='shopping_item']").change(function() { var item = $(this).val(); $("<li class='tile'>+ item +</li>").prependTo(".shopping_item_list"); $(".tile").removeClass("middle"); $(".shopping_item_list li:nth-child(3n+2)").addClass("middle"); }); This isn't working, and I can't figure out why. I'm new to jQuery. Also, I need to add the "delete" span to the list item and am not sure how. Thanks for any help you can offer! Edited to add: Thanks to the feedback here, I was able to make it work with: $("input[name='shopping_item']").change(function() { var item = $(this).val(); $("<li class='tile'>" + item + "<span class='delete'>Delete</span>" + " </li>").prependTo(".shopping_item_list"); $(".tile").removeClass("middle"); $(".shopping_item_list li:nth-child(3n+2)").addClass("middle"); }); However, in my jQuery, I have functions for the classes "tile" and "delete" that are not working for newly added items. // hide delete button $(".delete").hide(); // delete square $(".tile").hover(function() { $(this).find(".delete").toggle(); }); $(".tile").on("click", ".delete", function() { $(this).closest("li.tile").remove(); $(".tile").removeClass("middle"); $(".shopping_item_list li:nth-child(3n+2)").addClass("middle"); }); // cross off list $(".tile").click(function() { $(this).toggleClass("deleteAction"); }); The new items, which have these classes applied, aren't using these functions at all. Do I need to add the functions below the add-item code? Does the order in which they appear in my js file matter? Do I need to add some kind of function?
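
    For reference, the usual fix for handlers skipping newly added elements is event delegation: bind once to the static list and filter on a selector. A minimal sketch (jQuery 1.7+ assumed, since .on() already appears above):

      // Delegated handlers fire for .tile items prepended later, because
      // they are bound to the <ul>, which exists when the script runs.
      $(".shopping_item_list")
          .on("mouseenter mouseleave", ".tile", function() {
              $(this).find(".delete").toggle();
          })
          .on("click", ".delete", function(e) {
              e.stopPropagation(); // keep the delete click from also crossing off the item
              $(this).closest("li.tile").remove();
          })
          .on("click", ".tile", function() {
              $(this).toggleClass("deleteAction");
          });

    Since the one-time $(".delete").hide() never runs for new items, hiding .delete in the stylesheet instead keeps old and new tiles consistent.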


  • session variable values are not being passed between pages

    - by ravi nankani
    Hi, I am a little new to PHP, and although I have managed to pass values of session variables before, this piece of code is leaving me puzzled <form action="team_reg2.php" method="post" name="form1" class="cent" id="form1"> Team Registration "; print "member$i"; print "\n"; print "\n"; print "Please enter only id\n"; } ? Now this will pass via POST to team_reg2.php echo " please note your team id is 1 "; echo " your team members are : "; for($i=1;$i<=$num;$i++) { $name='mem'.$i; echo "$_POST[$name]"; } } else { $str="select * from $query where ("; for($i=1;$i<=$num;$i++) { $name='mem'.$i; $text="p_id='$_POST[$name]'"; if($i==1) $str=$str.$text; else $str=$str.' or '.$text; } $str=$str.')'; $query2=$str; echo "$str"; // echo "$query2"; $que=mysql_query($query2,$con) or die(mysql_error()); $num=mysql_num_rows($que); if($num!=0) { while($result=mysql_fetch_array($que)) { echo "$result[p_id] is already registered in team $result[t_id]"; } //include("reg_team.html"); } else if($num==0) { //echo $query; $query2="select max(t_id) from $query"; $que=mysql_query($query2,$con) or die(mysql_error()); //echo "$que"; $result=mysql_fetch_array($que); $max=$result['max(t_id)']; $max++; $num=$_SESSION['max_team']; for($i=1;$i<=$num;$i++) { $name='mem'.$i; if($_POST[$name]!="") { $query2="insert into $query values($max,'$_POST[$name]')"; $que=mysql_query($query2,$con); } } echo " please note your team id is $max "; echo " your team members are : "; for($i=1;$i<=$num;$i++) { $name='mem'.$i; echo "$_POST[$name]"; } } } ? I have done session_start(); at the beginning of the page itself. The problem is that echoing $_SESSION variables in the second file is not printing anything. Can someone please explain what's going on? Thank you.
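
    For reference, a minimal sketch of the pattern that usually resolves this (two hypothetical pages): session_start() has to run at the very top of every page that touches $_SESSION, before any HTML or even whitespace is sent.

      <?php // team_reg1.php
      session_start();               // before ANY output
      $_SESSION['max_team'] = 5;
      ?>
      <form action="team_reg2.php" method="post">...</form>

      <?php // team_reg2.php
      session_start();               // must be repeated at the top of this page too
      echo $_SESSION['max_team'];    // prints 5 only if both pages started the session
      ?>

    A stray space or blank line before <?php on either page can break the session cookie, and the warning about it is often suppressed, which matches the silent symptom described.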


  • How can I separate multiple logical pages in a text file I create in Perl?

    - by Micah
    So far, I've been successful with generating output to individual files by opening a file for output as part of the outer loop and closing it after all output is written. I had used a counting variable ($x) and appended .txt onto it to create a filename, and had written it to the same directory as my Perl script. I want to step the code up a bit, prompt for a file name from the user, open that file once and only once, and write my output one "printed letter" per page. Is this possible in plain text? From what I understand, chr(12) is the ASCII form feed character and will get me close to what I want, but is there a better way? Thanks in advance, guys. :) sub PersonalizeLetters{ print "\n\n Beginning finalization of letters..."; print "\n\n I need a filename to save these letters to."; print "\n Filename > "; $OutFileName = <stdin>; chomp ($OutFileName); open(OutFile, ">$OutFileName"); for ($x=0; $x<$NumRecords; $x++){ $xIndex = (6 * $x); $clTitle = @ClientAoA[$xIndex]; $clName = @ClientAoA[$xIndex+1]; #I use this 6x multiplier because my records have 6 elements. #For this routine I'm only interested in name and title. #Reset OutLetter array #Midletter has other merged fields that aren't specific to who's receiving the letter. @OutLetter = @MiddleLetter; for ($y=0; $y<=$ifLength; $y++){ #Step through line by line and insert the name. $WorkLine = @OutLetter[$y]; $WorkLine =~ s/\[ClientTitle\]/$clTitle/; $WorkLine =~ s/\[ClientName\]/$clName/; @OutLetter[$y] = $WorkLine; } print OutFile "@OutLetter"; #Will chr(12) work here, or is there something better? print OutFile chr(12); $StatusX = $x+1; print "Writing output $StatusX of $NumRecords... \n\n"; } close(OutFile); }
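
    For what it's worth, chr(12) is "\f" in Perl, and most printers and pagers treat it as a hard page break, so it is a reasonable plain-text choice. A minimal sketch of one letter per page (variable names hypothetical):

      open my $out, '>', $OutFileName or die "Cannot open $OutFileName: $!";
      for my $i (0 .. $#letters) {
          print {$out} $letters[$i];
          print {$out} "\f" if $i < $#letters;   # form feed between letters, none after the last
      }
      close $out;

    Skipping the final form feed avoids a blank trailing page on most printers.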


  • A versioning workflow for multiple similar (but not identical) deployments

    - by rs77
    I'm currently employed at a small non-tech organisation and have been given the role of coding the organisation's website. While I have enjoyed the task and have learnt much with web dev, I've encountered a few issues that I'm hoping someone will be able to help me with, or at least point me in the right direction on. A little background: The site I work on has subdomains that each have their own separate WordPress installation on - as this has been the easiest "backend" admin panel for the type of user who will be responsible for updating content (etc). Within the organisation I work under the Marketing Manager (MM) and I code according to his style guide and wire frames. While we have been working with only one subdomain since the beginning of the year, the project has been relatively simple and straightforward. However, lately the workflow is becoming a little more complicated as our original subdomain has been copied over to the other subdomains. Each of the new subdomains receives minor edits to their stylesheets (e.g. different pictures for background, slightly different colours here and there, etc). The issue: At the moment managing all the different subdomains has been "bearable", but the straw that's breaking the camel's back at the moment has been the slight reversions the MM has required now that the CEO has seen the final product. The problem I'm having with reversions in stylesheets is that the CEO will one week state that he likes change "X" and then, as the MM and I continue to modify the site (to now "Z"), will another week state that he wants us to change "X" to "W" while keeping most of the changes made in "Y". What I'm looking for is something that allows for: tracking file changes reverting changes made (or reverting back to 'a' from 'e' but including changes 'b' & 'c') easily uploading necessary files to their respective WP-theme installation Does anything out there come close to addressing these issues? If so, what? Thanks for any help! PS - I'm learning Git at the moment and it seems to do the "tracking file changes" quite nicely. Haven't learnt about the reverting changes bit yet, though. Maybe for my final point I'm thinking of creating a shell script to automatically upload the files to their folders. Does Git do this too though? Addendum (alexbbrown) I had a similar problem: I ran a custom version of mediawiki where I installed various extensions in the versioned core (with svn). Each of the extensions required a section in the config file, but the config file also needed local configuration for each of several deployments. I could have implemented it using includes, but they would not be versioned; and rebasing branches each time is a chore. +50 experience points for a good answer in git.
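
    As a sketch of how Git covers the three wishes (commands are standard; repository and file names are hypothetical):

      git log --oneline style.css      # 1. track: full history of one stylesheet
      git revert <sha-of-change-X>     # 2. revert: undo X as a new commit, keeping b and c
      git push live master             # 3. deploy: a post-receive hook on the server can
                                       #    check the files out into each subdomain's
                                       #    wp-content/themes directory

    Git itself does not upload anything; a small bare repository per subdomain with a post-receive hook (or a deploy script built on rsync/scp) is the usual way to fill that last gap.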


  • c++ program debugged well with Cygwin4 (under Netbeans 7.2) but not with MinGW (under QT 4.8.1)

    - by GoldenAxe
    I have a C++ program which takes a map text file and outputs it to a graph data structure I have made. I am using Qt as I needed a cross-platform program and GUI, as well as a visual representation of the map. I have several maps in different sizes (8x8 to 4096x4096). I am using unordered_map with a vector as key and vertex as value, and I'm passing hash(1) and equal functions which I wrote to the unordered_map at creation. Under Qt I am debugging my program with Qt 4.8.1 for desktop MinGW (Qt SDK); the program works and debugs well until I try the largest map of 4096x4096. Then the program gets stuck with the following error: "the inferior stopped because it received a signal from operating system". When debugging, the program halts at the hash function used inside the unordered_map, not as part of an insertion, but at a getter(2). Under Netbeans IDE 7.2 and Cygwin4 all works fine (debug and run). Some code info: typedef std::vector<double> coordinate; typedef std::unordered_map<coordinate const*, Vertex<Element>*, container_hash, container_equal> vertexsContainer; vertexsContainer *m_vertexes (1) hash function: struct container_hash { size_t operator()(coordinate const *cord) const { size_t sum = 0; std::ostringstream ss; for ( auto it = cord->begin() ; it != cord->end() ; ++it ) { ss << *it; } sum = std::hash<std::string>()(ss.str()); return sum; } }; (2) the getter: template <class Element> Vertex<Element> *Graph<Element>::getVertex(const coordinate &cord) { try { Vertex<Element> *v = m_vertexes->at(&cord); return v; } catch (std::exception& e) { return NULL; } } I was thinking maybe it was some memory issue at the beginning, so before trying Netbeans I checked it with Qt on my friend's PC with 16GB of RAM and got the same error. Thanks.
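
    A side note on the hash shown at (1): streaming every coordinate through an ostringstream allocates on each lookup. A sketch of combining the doubles directly (the mixing constant follows the Boost hash_combine pattern; equivalent behavior is an assumption):

      #include <functional>
      #include <vector>

      struct container_hash {
          std::size_t operator()(std::vector<double> const *cord) const {
              std::size_t seed = 0;
              for (double d : *cord)   // fold each coordinate into the running hash
                  seed ^= std::hash<double>()(d) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
              return seed;
          }
      };

    This will not by itself explain a crash that only one toolchain shows, but it removes an allocation-heavy suspect from the hot path while debugging.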


  • How dangerous is e.preventDefault();, and can it be replaced by keydown/mousedown tracking?

    - by yc
    I'm working on a tracking script for a fairly sophisticated CRM for tracking form actions in Google Analytics. I'm trying to balance the desire to track form actions accurately with the need to never prevent a form from working. Now, I know that doing something like this doesn't work. $('form').submit(function(){ _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]) }); The DOM unloads before this has a chance to process. So, a lot of sample code recommends something like this: $('form').submit(function(e){ e.preventDefault(); var form = this; _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]); //...do some other tracking stuff... setTimeout(function(){ form.submit(); }, 400); }); This is reliable in most cases, but it makes me nervous. What if something happens between e.preventDefault(); and when I get around to triggering the DOM-based submit? I've totally broken the form. I've been poking around some other analytics implementations, and I've noticed something like this: $('form').mousedown(function(){ _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]); }); $('form').keydown(function(e){ if(e.which===13) //if the keydown is the enter key _gaq.push(['_trackEvent', 'Form', 'Submit', $(this).attr('action')]); }); Basically, instead of interrupting the form submit, it preempts it by assuming that if someone is mousing down or keying down on Enter, then that form is submitted. Obviously, this will result in a certain amount of false positives, but it completely eliminates use of e.preventDefault();, which in my mind eliminates the risk that I might ever prevent a form from successfully submitting. So, my question: Is it possible to take the standard form tracking snippet and prevent it from ever fully preventing the form from submitting? Is the mousedown/keydown alternative viable? Are there any submission cases it may miss? Specifically, are there other ways to end up submitting besides the mouse and the keyboard enter? And will the browser always have time to process javascript before beginning to unload the page?
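
    One middle ground, sketched below, keeps preventDefault() but resubmits from the tracker's callback, with a timeout as a safety net. (hitCallback support in classic _gaq is assumed here; verify it against the ga.js version in use.)

      $('form').submit(function(e) {
          e.preventDefault();
          var form = this, done = false;
          function go() { if (!done) { done = true; form.submit(); } }
          _gaq.push(['_set', 'hitCallback', go]);  // runs once the hit is dispatched
          _gaq.push(['_trackEvent', 'Form', 'Submit', $(form).attr('action')]);
          setTimeout(go, 1000);                    // submit anyway if the callback never fires
      });

    The guard flag means the form submits exactly once whether the tracker calls back, the timeout fires, or both.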


  • How to read & write contents of a div from localstorage in chrome extensions?

    - by Minas Abovyan
    I am trying to build an extension that will allow users to put some parameters into a text box in the popup, generate a link using that information and add it to the said popup. I have all that working, but needless to say, it gets flushed every time the user opens the extension anew. I'd like the info that has been put in there to stay, but can't seem to get it to work. Here's what I have thus far: manifest.json { "manifest_version": 2, "name": "Test", "description": "Test Extension", "version": "1.0", "permissions": [ "http://*/*", "https://*/*" ], "browser_action": { "default_title": "This is a test", "default_popup": "popup.html" } } popup.html <!DOCTYPE html> <html> <head> </head> <body> <div id="linkContainer"/> <input type="text" id="catsList"/> <button type="button" id="addToList">Add</button> <script src="popup.js"></script> </body> </html> popup.js function addCats() { var a = document.createElement('a'); a.appendChild(document.createTextNode(document.getElementById('catsList').value)); a.setAttribute('href', 'http://google.com'); var p = document.createElement('p'); p.appendChild(a) document.getElementById('linkContainer').appendChild(p); indexLinks() } function indexLinks() { var links = document.getElementsByTagName("a"); for (var i = 0; i < links.length; i++) { (function () { var ln = links[i]; var location = ln.href; ln.onclick = function () { chrome.tabs.create({active: true, url: location}); }; })(); } }; document.getElementById('addToList').onclick = addCats; My guess is that I need something along the lines of localStorage['cointainer'] = document.getElementById('linkContainer'); at the end of addCats() and a call to something like function loadLocalStorage() { var container = document.getElementById('linkContainer'); container.innerHTML = localStorage['container']; } at the beginning, but doing that didn't work. Not sure what's going wrong. Also, if there is a different way to save users' additions, I'd be open to it.
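
    A sketch along those lines that matches the code above (plain localStorage assumed acceptable; chrome.storage is the extension-specific alternative): store the container's markup rather than the element itself, use the same key for saving and loading, and rebind the handlers after restoring, since onclick functions are not serialized.

      function saveList() {
          localStorage['container'] = document.getElementById('linkContainer').innerHTML;
      }
      function loadList() {
          var saved = localStorage['container'];
          if (saved) {
              document.getElementById('linkContainer').innerHTML = saved;
              indexLinks();   // restore the chrome.tabs.create click handlers
          }
      }
      loadList();             // once, when popup.js runs
      // ...and call saveList() at the end of addCats().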


  • Is it possible to start (and stop) a thread inside a DLL?

    - by Jerry Dodge
    I'm pondering some ideas for building a DLL for some common stuff I do. One thing I'd like to check if it's possible is running a thread inside of a DLL. I'm sure I would be able to at least start it, and have it automatically free on terminate (and make it forcefully terminate itself) - that I can see wouldn't be much of a problem. But once I start it, I don't see how I can continue communicating with it (especially to stop it), mainly because each call to the DLL is unique (as far as my knowledge tells me), but I also know very little of the subject. I've seen how on some occasions a DLL can be loaded at the beginning and released at the end when it's not needed anymore. I have 0 knowledge or experience with this method, other than just seeing something related to it; couldn't even tell you what or how, I don't remember. But is this even possible? I know about ActiveX/COM but that is not what I want - I'd like just a basic DLL that can be used across languages (specifically C#). Also, if it is possible, then how would I go about doing callbacks from the DLL to the app? For example, when I start the thread, I most probably will assign a function (which is inside the EXE) to be the handler for the events (which are triggered from the DLL). So I guess what I'm asking is - how to load a DLL for continuous work and release it when I'm done - as opposed to the simple method of calling individual functions in the DLL as needed. In the same case - I might assign variables or create objects inside the DLL. How can I make sure that once I assign that variable (or create the object), it will still be available the next time I call the DLL? Obviously it would require a mechanism to initialize/finalize the DLL (i.e. create the objects inside the DLL when the DLL is loaded, and free the objects when the DLL is unloaded). EDIT: In the end, I will wrap the DLL inside of a component, so when an instance of the component is created, the DLL will be loaded and a corresponding thread will be created inside the DLL; then when the component is freed, the DLL is unloaded. I also need to make sure that if there are, for example, 2 of these components, there will be 2 instances of the DLL loaded, one for each component. Is this in any way related to the use of an IInterface? Because I also have 0 experience with this. No need to answer it directly with sample source code - a link to a good tutorial would be great.


  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up ssl support for Apache 2 on Debian. Versions are: Debian GNU/Linux 6.0 apache2 2.2.16-6+squeeze1 I followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy, in case tell me): # mkdir /etc/apache2/ssl # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem at this point I've a doubt about permissions on apache.pem, at this step they are -rw-r--r-- 1 root root Maybe it has to belong to www-data? Then I enable ssl-mod with # a2enmod ssl # /etc/init.d/apache2 restart I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose): <VirtualHost *:8080> SSLEngine on SSLCertificateFile /etc/apache2/ssl/apache.pem SSLCertificateKeyFile /etc/apache2/ssl/apache.pem ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options Indexes FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user1/public_html/ ServerName first.server.org # Other directives here </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user2/public_html/ ServerName second.server.org # Other directives here </VirtualHost> I have to point out that the same configuration works on http (it is a copy of /etc/apache2/sites-available/default with some differences - port and ssl support). My /etc/apache2/ports.conf is the following: # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz #NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. #NameVirtualHost *:8080 Listen 8080 </IfModule> <IfModule mod_gnutls.c> Listen 8080 </IfModule> Any suggestion? Thanks
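
    One detail worth checking against the files above: several name-based virtual hosts share *:8080, but the matching NameVirtualHost line in ports.conf is commented out, and an SSLEngine on vhost cannot usefully share a port with plain-HTTP vhosts anyway, because the TLS handshake happens before Apache can see which hostname was requested. A sketch of the ports.conf side:

      NameVirtualHost *:8080
      Listen 8080

      # keep the plain-HTTP name-based sites on a different port than the SSL site,
      # then enable the site and reload:
      #   a2ensite default-ssl && /etc/init.d/apache2 reload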


  • Programmatically add an ISAPI extension dll in IIS 7 using ADSI?

    - by fretje
    I apologize beforehand; this is a cross post of this SO question. I thought I'd ask it there first, but apparently it doesn't harvest any answers there. I hope it will get more attention here. When I have an answer somewhere, I'll delete the other one. I'm trying to programmatically add an ISAPI extension dll in IIS using ADSI. This has been working for ages on previous versions of IIS, but it seems to fail on IIS 7. I am using code similar to that shown in this question: var web = GetObject("IIS://localhost/W3SVC/1/ROOT/specificVirtualDirectory"); var maps = web.ScriptMaps.toArray(); maps[maps.length] = ".aaa,c:\\path\\to\\isapi\\extension.dll,1,GET,POST"; web.ScriptMaps = maps.asDictionary(); web.SetInfo(); After executing that code, I do see an "AboMapperCustom-12345678" entry for that specific dll in the "Handler mappings" of the specific virtual directory in which I added the script map. But when I try to use that extension in a browser, I always get HTTP Error 404.2 Not Found The page you are requesting cannot be served because of the ISAPI and CGI Restriction list settings on the Web server. Even after adding an entry to allow that specific dll in the "ISAPI and CGI restrictions", I keep getting that error. To make it actually work, I first have to undo these steps (encountering the same issue as the OP of the question mentioned above: after deleting the script map entry from the IIS manager GUI, I also have to programmatically delete it using ADSI before it's actually gone from the metabase). And then manually add an entry like this: inetmgr - webserver - website - virtual directory - handler mappings - add script map... path = *.dll, executable = <path to dll>, name = <doesn't matter, but it's mandatory> click "yes" on the question "do you want to allow this ISAPI extension?" When I compare the 2 entries, they are exactly the same, except for the "Entry Type", which seems to be "Inherited" for the programmatically added one and "Local" for the one added manually. The strange thing is, even though it says "Inherited", I don't see it anywhere in IIS on a higher level. Where is it inheriting from? In my code, I do add the script map to the specific virtual directory, so it should be "Local" as well. Maybe there is the problem, but I don't know how to add a "Local" script map using ADSI. I really would like to keep using the ADSI method, as otherwise I will have to use different methods in our setup when working with IIS 7 or previous versions, and I would like to avoid that. To recap: How can I programmatically add a script map entry and its companion CGI and ISAPI restrictions entry to IIS 7 using ADSI? Anybody who can shed some light on this? Any help appreciated.


  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me, and Google searches haven't returned much that's fruitful. Edit: I'm updating this to now include my present steps 1) I create a Master Page of my template 2) I add a bunch of text frames where I want the imported data from the XML file to be placed 3) I open the "Tags" window and import an XML file 4) I mark my text frames in the Master Document with the appropriate tags 5) I then add a lot of pages (like 200) to the document 6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail. So there's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? BTW, here's an example of Adobe's great documentation for merging repeated XML elements. There's gotta be more... InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data EDIT: Here's some of the sample XML; notice the ITEM will repeat. I've also truncated the data in the "desc" tag: <output> <item> <user_name>taude</user_name> <date>2009-02-21</date> <title>Wishful Thinking</title> <desc>Skiing up in Vermont on a beautiful day. This photo of</desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail> </item> <item> <user_name>taude</user_name> <date>2009-02-22</date> <title>Skiing Self Portrait</title> <desc>I was inspired by ML's self-portrait while </desc> <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail> </item> </output> Here's what my imported XML looks like with the InDesign Structure


  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x PostgreSQL version: 8.4.x I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell). A long time ago in a galaxy far, far away... Create the database location (/var has insufficient space; also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!): sudo mkdir -p /home/postgres/main sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres sudo chown -R postgres.postgres /home/postgres sudo chmod -R 700 /home/postgres sudo usermod -d /home/postgres/ postgres All good to here. Next, restart the server and configure the database using these installation instructions: sudo apt-get install postgresql pgadmin3 sudo /etc/init.d/postgresql-8.4 stop sudo vi /etc/postgresql/8.4/main/postgresql.conf Change data_directory to /home/postgres/main sudo /etc/init.d/postgresql-8.4 start sudo -u postgres psql postgres \password postgres sudo -u postgres createdb climate pgadmin3 Use pgadmin3 to configure the database and create a schema. A New Hope The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy. perl Makefile.PL sudo make install sudo apt-get install perl-doc (strangely, it is not called perldoc) perldoc SQL::Translator::Manual Extract a PostgreSQL-friendly DDL and all the MySQL data: sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p The Database Strikes Back Recreate the structure in PostgreSQL as follows: pgadmin3 (switch to it) Click the Execute arbitrary SQL queries icon Open climate-pg-ddl.sql Search for TABLE " replace with TABLE climate." (insert the schema name climate) Search for on " replace with on climate." (insert the schema name climate) Press F5 to execute This results in: Query returned successfully with no result in 122 ms. Replies of the Jedi At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that they can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)? Resources A fair bit of information was needed to get this far: https://help.ubuntu.com/community/PostgreSQL http://articles.sitepoint.com/article/site-mysql-postgresql-1 http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL http://pgfoundry.org/frs/shownotes.php?release_id=810 http://sqlfairy.sourceforge.net/ Thank you!
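
    For the data step, a sketch of one route that has worked for similar 5.1-to-8.4 migrations (sequence and table names below are hypothetical):

      # Re-dump the data in PostgreSQL-flavoured SQL and load it with psql:
      mysqldump --compatible=postgresql --no-create-info --complete-insert \
          --skip-add-locks -u root -p climate > climate-data.sql
      psql -U postgres -d climate -f climate-data.sql

      # Afterwards, advance each sequence past the imported keys, e.g.:
      #   SELECT setval('climate.station_id_seq', (SELECT max(id) FROM climate.station));

    Expect some hand-fixing of quoting (MySQL escapes quotes as \', PostgreSQL wants ''), and the setval step addresses the third question: it keeps new inserts from colliding with imported primary keys.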


  • Hudson on debian lenny

    - by Laurent
    Hello, I installed the Hudson daemon on one server (running on Debian lenny testing) some time ago. All was working until I performed an upgrade. Since then, Hudson isn't accessible at port 8080 (which is the default port used). I have looked for iptables problems, however port 8080 is open in INPUT and OUTPUT. The configuration file in /etc/default/hudson seems okay; I haven't touched it. And if I do a ps aux | grep hudson, the hudson daemon is running. Update 1: What is really strange for me is that in /var/log/hudson/hudson.log I get no error: [Winstone 2010/02/10 17:10:04] - Control thread shutdown successfully [Winstone 2010/02/10 17:10:04] - Winstone shutdown successfully Running from: /usr/share/hudson/hudson.war [Winstone 2010/02/10 17:10:43] - Beginning extraction from war file hudson home directory: /var/lib/hudson [Winstone 2010/02/10 17:10:44] - HTTP Listener started: port=8080 [Winstone 2010/02/10 17:10:44] - AJP13 Listener started: port=8009 [Winstone 2010/02/10 17:10:44] - Winstone Servlet Engine v0.9.10 running: controlPort=disabled 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Started initialization 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Listed all plugins 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Prepared all plugins 10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained INFO: Started all plugins 10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained INFO: Loaded all jobs 10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained INFO: Completed initialization 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@caa559d: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@caa559d]: org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1 10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1: defining beans [daoAuthenticationProvider,authenticationManager,userDetailsService]; root of factory hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@4d88a387: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy 10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@4d88a387]: org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0 10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0: defining beans [filter,legacy]; root of factory hierarchy 10 févr. 2010 17:10:47 hudson.TcpSlaveAgentListener <init> INFO: JNLP slave agent listener started on TCP port 59750 Update 2: What I get with lsof -i -n -P | grep hudson: java 28985 hudson 97u IPv6 2002707 0t0 TCP *:8080 (LISTEN) java 28985 hudson 99u IPv6 2002708 0t0 TCP *:8009 (LISTEN) java 28985 hudson 147u IPv6 2002711 0t0 TCP *:59750 (LISTEN) java 28985 hudson 150u IPv6 2002712 0t0 UDP *:33848 I don't know what I can verify. Does anyone have an idea to help me resolve this problem?
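
    Since the listener is clearly up on *:8080, a short local-versus-remote split, as a sketch (interface name assumed):

      curl -v http://localhost:8080/    # run on the server itself; if this works,
                                        # suspect the network path, not Hudson
      iptables -L -n -v                 # packet counters on DROP/REJECT rules
      tcpdump -i eth0 port 8080         # do remote SYNs even reach the box?

    If the local curl succeeds but remote access fails, the upgrade likely changed the firewall or a fronting proxy rather than Hudson itself.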


  • ubuntu bind9 AppArmor read permission denied (chroot jail)

    - by Richard Whitman
    I am trying to run bind9 with chroot jail. I followed the steps mentioned at : http://www.howtoforge.com/debian_bind9_master_slave_system I am getting the following errors in my syslog: Jul 27 16:53:49 conf002 named[3988]: starting BIND 9.7.3 -u bind -t /var/lib/named Jul 27 16:53:49 conf002 named[3988]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' Jul 27 16:53:49 conf002 named[3988]: adjusted limit on open files from 4096 to 1048576 Jul 27 16:53:49 conf002 named[3988]: found 4 CPUs, using 4 worker threads Jul 27 16:53:49 conf002 named[3988]: using up to 4096 sockets Jul 27 16:53:49 conf002 named[3988]: loading configuration from '/etc/bind/named.conf' Jul 27 16:53:49 conf002 named[3988]: none:0: open: /etc/bind/named.conf: permission denied Jul 27 16:53:49 conf002 named[3988]: loading configuration: permission denied Jul 27 16:53:49 conf002 named[3988]: exiting (due to fatal error) Jul 27 16:53:49 conf002 kernel: [74323.514875] type=1400 audit(1343433229.352:108): apparmor="DENIED" operation="open" parent=3987 profile="/usr/sbin/named" name="/var/lib/named/etc/bind/named.conf" pid=3992 comm="named" requested_mask="r" denied_mask="r" fsuid=103 ouid=103 Looks like the process can not read the file /var/lib/named/etc/bind/named.conf. I have made sure that the owner of this file is user bind, and it has the read/write access to it: root@test:/var/lib/named/etc/bind# ls -atl total 64 drwxr-xr-x 3 bind bind 4096 2012-07-27 16:35 .. drwxrwsrwx 2 bind bind 4096 2012-07-27 15:26 zones drwxr-sr-x 3 bind bind 4096 2012-07-26 21:36 . -rw-r--r-- 1 bind bind 666 2012-07-26 21:33 named.conf.options -rw-r--r-- 1 bind bind 514 2012-07-26 21:18 named.conf.local -rw-r----- 1 bind bind 77 2012-07-25 00:25 rndc.key -rw-r--r-- 1 bind bind 2544 2011-07-14 06:31 bind.keys -rw-r--r-- 1 bind bind 237 2011-07-14 06:31 db.0 -rw-r--r-- 1 bind bind 271 2011-07-14 06:31 db.127 -rw-r--r-- 1 bind bind 237 2011-07-14 06:31 db.255 -rw-r--r-- 1 bind bind 353 2011-07-14 06:31 db.empty -rw-r--r-- 1 bind bind 270 2011-07-14 06:31 db.local -rw-r--r-- 1 bind bind 2994 2011-07-14 06:31 db.root -rw-r--r-- 1 bind bind 463 2011-07-14 06:31 named.conf -rw-r--r-- 1 bind bind 490 2011-07-14 06:31 named.conf.default-zones -rw-r--r-- 1 bind bind 1317 2011-07-14 06:31 zones.rfc1918 What could be wrong here?
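
    The denial line names the profile, so the usual fix is to extend it for the chroot and reload; a sketch with paths taken from the log (the local include file is the conventional place, where the packaging provides one):

      # /etc/apparmor.d/local/usr.sbin.named
      /var/lib/named/** r,
      /var/lib/named/etc/bind/** r,
      /var/lib/named/var/** rw,

      # then reload the profile:
      apparmor_parser -r /etc/apparmor.d/usr.sbin.named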


  • Random servers in Citrix servers suddenly bluescreens (mostly 0x0000008e and 0x0000007e)

    - by Rasmus Rask
    I'm responsible for a Citrix Presentation Server 4.5 farm. Starting Friday 30. November, my servers started to crash randomly. So far we've experienced 80 crashes, so it's obviously becoming an increasingly big problem for us. I have 12+ years experience with IT, so I know the difference between 0 and 1, but I have a hard time cracking this. We've rolled back any recent changes I can think of for different groups of servers, but all groups still seem to crash. I don't have the skills to interpret the memory dumps to find the culprit. Has anyone encountered the same or a similar problem? - might be a generic Windows issue Other than executing "analyze -v" in WinDbg, how do I work my way through the memory dumps to see what actually triggered the BSOD? Any suggested steps in getting to the bottom of this? Any help is greatly appreciated. I can also provide links to kernel memory dumps or WinDbg output if necessary. Thanks! Problem description The majority of the STOP errors we encounter are: 0x0000008e KERNEL_MODE_EXCEPTION_NOT_HANDLED (50%) 0x0000007e SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (26%) 0x00000050 PAGE_FAULT_IN_NONPAGED_AREA (21%) We also see a few 0x0000000a IRQL_NOT_LESS_OR_EQUAL (3%). For both 0x0000008e and 0x0000007e bug checks, the exception code is 0xc0000005 (Access Violation). When opening dump files in WinDbg, most details are exactly the same, for all the 0x0000008e and 0x0000007e bug checks respectively: 0x0000008e Exception address: 0x808bc9e3 Trap frame: [varies] FAILURE_BUCKET_ID: 0x8E_nt!HvpGetCellMapped+97 Probably Caused by (IMAGE_NAME): ntkrpamp.exe 0x0000007e Exception address: 0x808369b6 Exception record address: 0xf70d3be0 Context record address: 0xf70d38dc FAILURE_BUCKET_ID: 0x7E_nt!MmPurgeSection+14 Probably Caused by: memory_corruption About 30% of the crashes happens between 17:00 and 19:00, which leads me to believe this tend to happen more often during logoffs. But then again, only ~15% occurs between 15:00 and 17:00. Summary of farm Citrix Presentation Server 4.5 R06 on Windows Server 2003 R2 SP2 All high priority patches, at least as of October installed Virtualized using VMWare ESX/vSphere 4.1 on HP Proliant BL460c G6 blade servers About 53 Presentation Servers in production, divided into three silos - only one of which, the largest, is affected 2 vCPU's (5 GHz reserved), 8 GB RAM (all reserved) for each Presentation Server Plenty of free disk space Very few printer drivers - automated deletion of non-approved drivers every night ~1.000 peak concurrent users, which is reached at around 10:30 (on weekdays) Number of sessions steadily decline between 15:00 and 19:00 to ~230
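
    For the second question, a stock WinDbg sequence that often narrows 0x8E/0x7E dumps further than "!analyze -v" alone (module names will vary per dump):

      !analyze -v
      kv
      .trap <trap-frame-address-from-the-bugcheck-parameters>
      kv
      lm t n
      !thread

    kv lists the stack with trap-frame addresses; .trap switches context to the faulting frame, so the second kv shows the stack at the moment of the fault; lm t n lists loaded modules with build timestamps, where a recently updated third-party driver sitting near the faulting stack is the classic culprit. Given how many of these dumps land in nt!HvpGetCellMapped (registry hive access), registry-filtering agents such as AV or profile-management tools may deserve a hard look.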


  • Windows 7 Samba issue

    - by abduls85
    We have a strange samba issue affecting only one user. Our samba setup is as follow : Red Hat Enterprise Linux Server release 5.4 (Tikanga) - Samba Server Samba version 3.0.33-3.14.el5 - Samba version Domain Controller WIN2008R2 Standard - Windows DC Windows 7 64 bit - Client PCs User mentioned that he faced this problem after he force shutdown his PC few weeks ago. By right, for all users when we access \\sambaservername in windows it will show all the shares in the samba server but for this user once he startup his PC he will not be able to access \\sambaservername, Error message Windows cannot access \\sambaservername Current workaround to solve the problem : Try to access one share in \\sambaservername for instance \\sambaservername\sharedfolder1. But even when doing so, it will first prompt an error in the beginning, error message is as follows Logon failure: unknown user name or bad password. user need to enter the credentials again and he can access the share. Thereafter, he will be able to access \\sambaservername without any issues. But once he reboots his computer the problem will persists. Troubleshooting done so far: Ensure the following settings: Go to: Control Panel → Administrative Tools → Local Security Policy Select: Local Policies → Security Options "Network security: LAN Manager authentication level" → Send LM & NTLM responses "Minimum session security for NTLM SSP" → uncheck: Require 128-bit encryption Advise user to reset his password and try again but problem still persists Tried my account on users' PC, there is no issues. Tried user account on serveral other Windows 7 PC including mine but problem still persists. Windows XP does not have this problem. Ensure that there is no stored crendentials on the windows 7 PC. Checked the credentials manager in Control Panel as well as typing this command rundll32.exe keymgr.dll, KRShowKeyMgr Restart winbindd daemon on samba server but to no avail. I suspect this is due to some caching issue but not sure where is the issue. Whenever the user has error accessing \\sambaservername, the following errors will be logged in the samba server : [2012/10/10 17:10:26, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE! 
    But after the workaround, there are no more errors. I suspect, after reading the articles listed below, that some amendments need to be made to the /var/samba/cache directory: http://www.linuxquestions.org/questions/linux-server-73/getent-passwd-dont-show-ad-groups-and-users-745829/ http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/tdb.html http://lists.samba.org/archive/samba/2010-May/155521.html http://lists.samba.org/archive/samba/2011-March/161912.html http://lzeit.blogspot.sg/2009/10/samba-shares-inaccessible-after-power.html There are several users using the Samba server and I would like to solve this problem without any impact. I saw the following article: http://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html#WINBINDCACHETIME "winbind offline logon (G) This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache. Default: winbind offline logon = false Example: winbind offline logon = true " Any idea on how to delete the entry for one user in the local cache?
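
    To the final question, a sketch (tdb file locations vary by build; the path below is a common default on Red Hat systems):

      net cache flush                                # clears winbindd's generic cache
      tdbtool /var/lib/samba/netsamlogon_cache.tdb   # per-user logon cache
      > keys                                         # list entries, find the user's SID
      > delete <key-for-that-user>
      > quit

    Stopping winbindd first, and backing up the .tdb before touching it, would be prudent on a server several users depend on.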


  • Squid w/ SquidGuard fails w/ "Too few redirector processes are running"

    - by DKNUCKLES
    I'm trying to implement a Squid proxy in a quick and easy fashion and I'm receiving some errors I have been unable to resolve. The box is a pre-made appliance; however, it seems to fail on launch. The following is the cache.log file when I attempt to launch the squid service. 2012/11/18 22:14:29| Starting Squid Cache version 3.0.STABLE20-20091201 for i686 -pc-linux-gnu... 2012/11/18 22:14:29| Process ID 12647 2012/11/18 22:14:29| With 1024 file descriptors available 2012/11/18 22:14:29| Performing DNS Tests... 2012/11/18 22:14:29| Successful DNS name lookup tests... 2012/11/18 22:14:29| DNS Socket created at 0.0.0.0, port 40513, FD 8 2012/11/18 22:14:29| Adding nameserver 192.168.0.78 from /etc/resolv.conf 2012/11/18 22:14:29| Adding nameserver 8.8.8.8 from /etc/resolv.conf 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'bin' processes 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| ipcCreate: /opt/squidguard/bin: (13) Permission denied 2012/11/18 22:14:29| helperOpenServers: Starting 5/5 'squid-auth.pl' processes 2012/11/18 22:14:29| User-Agent logging is disabled. 2012/11/18 22:14:29| Referer logging is disabled. 2012/11/18 22:14:29| Unlinkd pipe opened on FD 23 2012/11/18 22:14:29| Swap maxSize 10240000 + 8192 KB, estimated 788322 objects 2012/11/18 22:14:29| Target number of buckets: 39416 2012/11/18 22:14:29| Using 65536 Store buckets 2012/11/18 22:14:29| Max Mem size: 8192 KB 2012/11/18 22:14:29| Max Swap size: 10240000 KB 2012/11/18 22:14:29| Version 1 of swap file with LFS support detected... 2012/11/18 22:14:29| Rebuilding storage in /opt/squid3/var/cache (DIRTY) 2012/11/18 22:14:29| Using Least Load store dir selection 2012/11/18 22:14:29| Set Current Directory to /opt/squid3/var/cache 2012/11/18 22:14:29| Loaded Icons. 2012/11/18 22:14:29| Accepting HTTP connections at 10.0.0.6, port 3128, FD 25. 2012/11/18 22:14:29| Accepting ICP messages at 0.0.0.0, port 3130, FD 26. 2012/11/18 22:14:29| HTCP Disabled. 2012/11/18 22:14:29| Ready to serve requests. 2012/11/18 22:14:29| Done reading /opt/squid3/var/cache swaplog (0 entries) 2012/11/18 22:14:29| Finished rebuilding storage from disk. 2012/11/18 22:14:29| 0 Entries scanned 2012/11/18 22:14:29| 0 Invalid entries. 2012/11/18 22:14:29| 0 With invalid flags. 2012/11/18 22:14:29| 0 Objects loaded. 2012/11/18 22:14:29| 0 Objects expired. 2012/11/18 22:14:29| 0 Objects cancelled. 2012/11/18 22:14:29| 0 Duplicate URLs purged. 2012/11/18 22:14:29| 0 Swapfile clashes avoided. 2012/11/18 22:14:29| Took 0.02 seconds ( 0.00 objects/sec). 2012/11/18 22:14:29| Beginning Validation Procedure 2012/11/18 22:14:29| WARNING: redirector #1 (FD 9) exited 2012/11/18 22:14:29| WARNING: redirector #2 (FD 10) exited 2012/11/18 22:14:29| WARNING: redirector #3 (FD 11) exited 2012/11/18 22:14:29| WARNING: redirector #4 (FD 12) exited 2012/11/18 22:14:29| Too few redirector processes are running FATAL: The redirector helpers are crashing too rapidly, need help! Squid Cache (Version 3.0.STABLE20-20091201): Terminated abnormally.
CPU Usage: 0.112 seconds = 0.032 user + 0.080 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Memory usage for squid via mallinfo(): total space in arena: 2944 KB Ordinary blocks: 2857 KB 6 blks Small blocks: 0 KB 0 blks Holding blocks: 1772 KB 8 blks Free Small blocks: 0 KB Free Ordinary blocks: 86 KB Total in use: 4629 KB 157% Total free: 86 KB 3% The "permission denied" area is where I have been focusing my attention with no luck. The following is what I've tried. Chmod'ing the /opt/squidguard/bin folder to 777 Changing the user that squidguard runs under to root / nobody / www-data / squid3 Tried changing ownership of the /opt/squidguard/bin folder to all names listed above after assigning that user to run with squid. Any help with this would be greatly appreciated.
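
    For comparison, the redirector line in squid.conf normally names the squidGuard binary itself, and "ipcCreate: /opt/squidguard/bin: (13) Permission denied" reads as though the directory is configured as the program. A sketch (paths assumed from the log):

      url_rewrite_program /opt/squidguard/bin/squidGuard -c /opt/squidguard/conf/squidGuard.conf
      url_rewrite_children 5

    squidGuard also wants its blacklist db and log directories accessible to the user Squid runs as; permission problems there can likewise kill the helpers at startup.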


  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and finally return data in JSON format; these services are being monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and these are responding perfectly fine, so I think I can rule out any wrongdoing from Nginx. Here is my God configuration: # God configuration APP_ROOT = File.expand_path '../', File.dirname(__FILE__) God.watch do |w| w.name = "app_name" w.interval = 30.seconds # default w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D" # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`" w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`" w.start_grace = 10.seconds w.restart_grace = 10.seconds w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid" # User under which to run the process w.uid = 'web' w.gid = 'web' # Cleanup the pid file (this is needed for processes running as a daemon) w.behavior(:clean_pid_file) # Conditions under which to start the process w.start_if do |start| start.condition(:process_running) do |c| c.interval = 5.seconds c.running = false end end # Conditions under which to restart the process w.restart_if do |restart| restart.condition(:memory_usage) do |c| c.above = 150.megabytes c.times = [3, 5] # 3 out of 5 intervals end restart.condition(:cpu_usage) do |c| c.above = 50.percent c.times = 5 end end w.lifecycle do |on| on.condition(:flapping) do |c| c.to_state = [:start, :restart] c.times = 5 c.within = 5.minute c.transition = :unmonitored c.retry_in = 10.minutes c.retry_times = 5 c.retry_within = 2.hours end end end Here is my Unicorn configuration: # Unicorn configuration file APP_ROOT = File.expand_path '../', File.dirname(__FILE__) worker_processes 8 preload_app true pid "#{APP_ROOT}/tmp/unicorn.pid" listen 8001 stderr_path "#{APP_ROOT}/log/unicorn.stderr.log" stdout_path "#{APP_ROOT}/log/unicorn.stdout.log" before_fork do |server, worker| old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # someone else did our job for us end end end I have checked God status logs but it appears CPU and Memory Usage are never out of bounds. I also have something to kill high memory workers, which can be found on the GitHub blog page here. When running a tail -f on the Unicorn logs I see some requests, but they're few and far between, compared to the 60-100 a second I was seeing before this trouble arrived. This log also shows workers being reaped and started as expected. So my question is, how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at other times it's very slow, for long periods of time (which may or may not be peak traffic times). Any advice is much appreciated.
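
    One low-cost first step, sketched here: make Nginx log the upstream's share of each request, so slowness can be pinned to Unicorn or to the proxy path. Both variables are standard Nginx.

      log_format timed '$remote_addr "$request" $status '
                       'req=$request_time up=$upstream_response_time';
      access_log /var/log/nginx/app_timed.log timed;

    If $upstream_response_time stays small while $request_time balloons, the delay sits in front of Unicorn (slow clients, buffering); if both balloon, the eight workers or the app itself are the bottleneck.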



  • gmail dkim=neutral (no signature)

    - by Bretticus
    After testing much and retracing my steps, I still cannot get google mail to validate. My mail server is Debian 5.0 with exim Exim version 4.72 #1 built 31-Jul-2010 08:12:17 Copyright (c) University of Cambridge, 1995 - 2007 Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009) Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp Fixed never_users: 0 Size of off_t: 8 GnuTLS compile-time version: 2.4.2 GnuTLS runtime version: 2.4.2 Configuration file is /var/lib/exim4/config.autogenerated My remote smtp transport configuration: remote_smtp: debug_print = "T: remote_smtp for $local_part@$domain" driver = smtp helo_data = mailer.mydomain.com dkim_domain = mydomain.com dkim_selector = mailer dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key dkim_canon = relaxed .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS .endif .ifdef REMOTE_SMTP_HEADERS_REWRITE headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE .endif .ifdef REMOTE_SMTP_RETURN_PATH return_path = REMOTE_SMTP_RETURN_PATH .endif .ifdef REMOTE_SMTP_HELO_FROM_DNS helo_data=REMOTE_SMTP_HELO_DATA .endif The path to my private key is correct. I see a DKIM header in my messages as they end up in my gmail account: DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com; s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From; bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=; However, gmail headers always report dkim=neutral (no signature): dkim=neutral (no signature) [email protected] My DNS results: dig +short txt mailer._domainkey.mydomain.com mailer._domainkey. mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV" "BQTRHTkFEQ0J<snip>GdLamdaaG" "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo=" Note that the base64 public key is 364 chars long so I had to break up the key using bind9. $ORIGIN _domainkey. mydomain.com. mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV" "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG" "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN" "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E" "IFBVQkxJQyBLRVktLS0tLQo=") Can anyone point me in the right direction? I would really appreciate it.
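
    One observation on the published record: the p= value begins "LS0tLS1CRUdJ", which is base64 for "-----BEGI", i.e. the whole PEM file appears to have been base64-encoded a second time. p= should carry only the key body between the BEGIN/END lines. A sketch of producing it (key path taken from the transport above):

      openssl rsa -in /etc/exim4/dkim/mailer.mydomain.com.key -pubout -outform PEM \
          | sed '/^-----/d' | tr -d '\n'

    Paste that output as the p= value, then re-test; opendkim-testkey -d mydomain.com -s mailer -vvv is one convenient external check (tool availability assumed).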


  • TrueCrypt drive letter not available

    - by Tono Nam
    With C# or a batch file I mount a TrueCrypt volume located at A:\volumeTrueCrypt.tc. With C# I do: static void Main(string[] args) { var p = Process.Start( fileName:@"C:\Program Files\TrueCrypt\TrueCrypt.exe", arguments:@"/v a:\volumetruecrypt.tc /lw /a /p truecrypt" ); p.WaitForExit(); } The alternative is to run the command on the command line as: C:\Windows\system32>"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lw /a /p truecrypt Either way I get the error: Why do I get that error? I was able to run that command the first time. The moment I dismounted the volume and tried to mount it again, I got that error. I know that drive letter W is available because it shows as an available letter in TrueCrypt if I were to open it manually: If I were then to click the Mount button and type the password truecrypt (truecrypt is the password), it would successfully mount on drive W. Why am I not able to mount it from the command line?! If I change the drive letter on the command line it works. I want to use the drive W though. In other words, executing "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lz /a /p truecrypt will successfully mount that volume on drive Z, but I do not want to mount it on drive Z; I want to mount it on drive W. The first time I ran the batch it ran fine. Also, if I restart my computer, I believe it should work. More info on how to use TrueCrypt through the command line can be found at: http://www.truecrypt.org/docs/?s=command-line-usage Edit: I was also investigating when this error occurs. In order to generate this error you need to follow these steps: 1) execute the command: (note the /q argument at the end for quiet) "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt /q "C...TrueCrypt.exe" = location where trueCrypt is located /v "path" = location where volume is located /n = drive letter n /p truecrypt = password is "trueCrypt" /q = execute in quiet mode. do not show window note I am mounting to drive letter n 2) now volume should be mounted. 3) Open TrueCrypt and manually dismount that volume (without using the command line) 4) Attempt to run the same command line (without the /q so you see the error) "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt 5) an error should show up So the problem occurs when I manually dismount the volume. If I dismount it from the command line I get no errors. But I think this is a bug in TrueCrypt.
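
    Given that the failure only follows a GUI dismount, a sketch of dismounting from the command line instead (flags per the TrueCrypt command-line docs linked above):

      rem dismount volume W and exit without showing the window; add /f to force
      "C:\Program Files\TrueCrypt\TrueCrypt.exe" /d w /q

    If scripted mount/dismount pairs both go through the command line, the stale drive-letter state described above may not arise at all.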

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS server that is the upstream server for 8 different testing domains. The primary WSUS server is not part of any domain; it cannot be, due to external requirements on the lab I work in that I can't change. Routing is configured between my workstation and the primary WSUS server: I can RDP to the server without any problem, and the router is configured to forward any/any between my workstation and the server. The WSUS version is WSUS 3.0 SP2.

    What I want to do: I need to connect to the WSUS server with the WSUS Administration console from my local workstation. The end goal is to connect via PowerShell and manage it that way. I also need to port whatever I do here to the 8 test domains so I can manage those WSUS servers as well. The routing is all in place, so I can reach the servers; it is just the console connection that is causing problems.

    The problem: I cannot get my workstation to connect to the WSUS console. I get one of the following errors, depending on the setup.

    First error:

        Cannot connect to 'WSUS'. You do not have the permissions required to access this WSUS server. To connect to the server you must be a member of the WSUS Administrators or WSUS Reporters security groups.

    I also get warning 7012 in the event log, which says the same thing.

    Second error:

        Cannot connect to 'WSUS'. The server may be using another port or different Secure Sockets Layer setting.

    What I have tried: So far I have configured IIS for Anonymous Authentication on both WSUS Administration and ApiRemoting30, using an account I will call WSUS_User. With this in place I get the first error, but the local WSUS console can no longer be used either. Reverting to Windows Authentication only lets the local console work again, but the remote console then gives the second error. I have confirmed the port, and that no SSL is in use (the SSL policy is pushed from above and not something I can affect). I have placed WSUS_User in the groups mentioned in the error, but it still does not connect. I made sure WSUS_User has full access on C:\Program Files\Update Services and C:\Program Files\Update Services\WebServices. I am not very familiar with the inner workings of WSUS or IIS and have gone as far as I can figure out on my own; searching for these errors leads me back to the same steps about Anonymous Authentication and folder permissions. Note: I have cross-posted this to Stack Overflow as well.
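    A small reachability probe might help separate "wrong port / SSL setting" from "permissions" before changing IIS again. The sketch below assumes WSUS's default non-SSL port 8530 and the standard ApiRemoting30 endpoint path; the host name is a placeholder:

        import http.client

        HOST, PORT = "wsus-server", 8530   # assumption: adjust to the lab's values

        conn = http.client.HTTPConnection(HOST, PORT, timeout=10)
        try:
            # ApiRemoting30 is the web service the remote console (and the
            # UpdateServices API used from PowerShell) talks to.
            conn.request("GET", "/ApiRemoting30/WebService.asmx")
            resp = conn.getresponse()
            print(resp.status, resp.reason)
            # 200 -> endpoint reachable, so look at auth/group membership
            #        (the first console error); 401/403 points the same way.
        except OSError as exc:
            # Refused or reset connections suggest a port or SSL mismatch,
            # matching the second console error.
            print("cannot reach endpoint:", exc)
        finally:
            conn.close()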

    Read the article

  • How to create Windows 7 installation USB media from Linux? (to install Windows 7) - Help needed to find a better method

    - by Abel Coto
    I have been reading some web pages and posts here and in other forums about how to create Windows 7 installation USB media (to install Windows 7 from a USB stick) from Linux. I asked on TechNet about this, and they gave me the general idea of how to do it. I personally am not very familiar with Linux, but basically all that you need to do, in whatever way you do it, is the following: format a USB flash drive as either FAT32 or NTFS, create a partition that is large enough to host the Windows installation (give or take 3 GB for 64-bit, around 2.5 GB for 32-bit), and mark that partition as active/bootable. This can be done with Windows, but just as well with a tool like GParted, so you should be able to do the same in Debian. Once you have created that partition, mount the ISO that you downloaded and copy all files, starting from the root, into the root of the USB flash drive. That's all there is to it.

    There is a method I found in various places that is almost the same as what the TechNet answer describes, but it has one step that I don't know is really necessary or not. dd does not always work; basically, the missing step was to write a proper boot sector to the USB stick, which can be done from Linux with ms-sys. This works with the Windows 7 retail version. Here is the complete rundown again (see the scripted sketch after this list):

    1. Install ms-sys.

    2. Check which device your USB media is assigned to - here we will assume it is /dev/sdb.

    3. Delete all partitions, create a new one taking up all the space, set the type to NTFS, and mark it bootable:

        # cfdisk /dev/sdb

    4. Create the NTFS filesystem:

        # mkfs.ntfs -f /dev/sdb1

    5. Mount the ISO and the USB media:

        # mount -o loop win7.iso /mnt/iso
        # mount /dev/sdb1 /mnt/usb

    6. Copy over all the files:

        # cp -r /mnt/iso/* /mnt/usb/

    7. Write the Windows 7 MBR to the USB stick:

        # ms-sys -7 /dev/sdb

    ...and you're done.

    Shouldn't the USB stick work without the last step (# ms-sys -7 /dev/sdb)? Or, to make the USB bootable, is writing the boot sector a must, rather than only marking the partition as bootable? Would it be better to use rsync instead of cp -r? All these steps should be done as root, I suppose - or, if not, chmod to 664 and chown the directories where the USB and the ISO are mounted, no? But I suppose the easiest thing is to copy the data as root, and that this will not affect the data. Has anyone tried this method, or something similar like copying the ISO with dd?
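    For reference, the non-interactive part of the rundown above can be scripted. The sketch below is untested and destructive (run as root only after double-checking the device name, which is an assumption here). The interactive cfdisk step is left out, so the bootable NTFS partition must already exist; it also uses rsync -a in place of cp -r, which copies the same files while additionally preserving timestamps, so either should work:

        import subprocess

        DEVICE = "/dev/sdb"        # assumption: verify with fdisk -l first!
        PARTITION = DEVICE + "1"
        ISO = "win7.iso"

        def run(*cmd: str) -> None:
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Steps 4-7 of the rundown (partitioning done beforehand with cfdisk)
        run("mkfs.ntfs", "-f", PARTITION)
        run("mkdir", "-p", "/mnt/iso", "/mnt/usb")
        run("mount", "-o", "loop", ISO, "/mnt/iso")
        run("mount", PARTITION, "/mnt/usb")
        run("rsync", "-a", "/mnt/iso/", "/mnt/usb/")
        run("ms-sys", "-7", DEVICE)          # write the Windows 7 MBR
        run("umount", "/mnt/usb")
        run("umount", "/mnt/iso")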

    Read the article

  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. So far I have tested APC with several PHP files, with and without including other PHP files via include_once. I have also tried tweaking the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result: apc.php shows no change in any values (except, of course, the changed apc.ini values, which show up as they should). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.

    The apc.php test page shows:

        General Cache Information
        APC Version           3.1.9
        PHP Version           5.2.10
        APC Host              xxxx.xx.xx
        Server Software       Apache/2.2.3 (CentOS)
        Shared Memory         1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex locks)
        Start Time            2011/07/26 11:53:56
        Uptime                0 minutes
        File Upload Support   1

        Cached Files                 0 (0.0 Bytes)
        Hits                         1
        Misses                       1
        Request Rate (hits, misses)  2.00 cache requests/second
        Hit Rate                     1.00 cache requests/second
        Miss Rate                    1.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        Cached Variables             0 (0.0 Bytes)
        Hits                         0
        Misses                       0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate                     0.00 cache requests/second
        Miss Rate                    0.00 cache requests/second
        Insert Rate                  0.00 cache requests/second
        Cache full count             0

        apc.cache_by_default        1
        apc.canonicalize            1
        apc.coredump_unmap          0
        apc.enable_cli              0
        apc.enabled                 1
        apc.file_md5                0
        apc.file_update_protection  2
        apc.filters
        apc.gc_ttl                  3600
        apc.include_once_override   0
        apc.lazy_classes            0
        apc.lazy_functions          0
        apc.max_file_size           16
        apc.mmap_file_mask          /tmp/apcphp5.095eRm
        apc.num_files_hint          1024
        apc.preload_path
        apc.report_autofilter       0
        apc.rfc1867                 0
        apc.rfc1867_freq            0
        apc.rfc1867_name            APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix          upload_
        apc.rfc1867_ttl             3600
        apc.serializer              default
        apc.shm_segments            1
        apc.shm_size                128M
        apc.slam_defense            0
        apc.stat                    0
        apc.stat_ctime              0
        apc.ttl                     7200
        apc.use_request_time        1
        apc.user_entries_hint       4096
        apc.user_ttl                7200
        apc.write_lock              1

        Host Status Diagrams:
        Free: 128.0 MBytes (100.0%)    Hits: 1 (50.0%)
        Used: 20.3 KBytes (0.0%)       Misses: 1 (50.0%)
        Detailed Memory Usage and Fragmentation:
        Fragmentation: 0%

    phpinfo() shows:

        Server API              CGI/FastCGI
        APC Version             3.1.9
        APC Debugging           Enabled
        MMAP Support            Enabled
        MMAP File Mask          /tmp/apcphp5.JkKDk7
        Locking type            pthread mutex Locks
        Serialization Support   php
        Revision                $Revision: 308812 $
        Build Date              Jul 21 2011 14:31:12

    I followed the steps in http://www.litespeedtech.com/support/forum/showthread.php?t=4189 to find out whether the suexec settings would prevent caching:

        [root@host /]# ps -ef | grep lsphp
        root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
        [root@host /]# ps -waux
        root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that no lsphp is running on the host. I also read the following article and its comments, concluding that in my case the problem is not suexec, since the user apache is the owner of the httpd process: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/ The suexec command is also not recognized when logged in as root on the host, and I am almost certain there is no cPanel running on the host, so I cannot check whether a setting there would reset the running cache process at some interval.

    This leaves me with few clues about where to head next. I tried to set (with chown and chgrp) apache as the owner of apc.php and some test PHP files, which resulted in a 500 server error. Is there a way to check whether file permissions are preventing APC from staying up? I would be tremendously grateful for any suggestions or help.
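    One detail in the phpinfo() output above may matter: the Server API is CGI/FastCGI. APC's cache lives in the PHP process's shared memory segment, so if every request is handled by a fresh, short-lived CGI/FastCGI process, the cache really does start from zero each time, which would match both the 0-minute uptime and the resetting start time. A hedged way to test that theory from the outside (the URL and the regex against apc.php's HTML are assumptions):

        import re
        import time
        import urllib.request

        URL = "http://example.com/apc.php"   # wherever apc.php is served

        def start_time() -> str:
            html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
            match = re.search(r"Start Time\s*</td>\s*<td[^>]*>\s*([^<]+)", html)
            return match.group(1).strip() if match else "?"

        first = start_time()
        time.sleep(3)
        second = start_time()
        # Identical times: one persistent PHP process pool holds the cache.
        # Different times: each request got a fresh process and a fresh cache.
        print(first, "|", second)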

    Read the article

  • JNDI Datasource Problem on Tomcat 6, Hibernate

    - by Asuman AKYILDIZ
    I am using Tomcat 6 as the application server, Struts and Hibernate, and MyEclipse 6.0. My application currently uses the JDBC driver directly, but I need to modify it to use a JNDI datasource. I followed the steps described in the Tomcat 6.0 how-to tutorial. I defined my resource in the Tomcat conf:

        <Resource name="jdbc/ats" global="jdbc/ats" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@//localhost:1521/MISDEV"
                  username="TEST" password="TEST"
                  maxActive="20" maxIdle="10" maxWait="-1"
                  validationQuery="SELECT 1 from dual"
                  removeAbandoned="true" removeAbandonedTimeout="30"
                  logAbandoned="false"/>

    I added a reference in my application's web.xml:

        <resource-ref>
          <description>Oracle Datasource example</description>
          <res-ref-name>jdbc/ats</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    And I defined the datasource and dialect in my hibernate.cfg.xml:

        <property name="connection.datasource">java:comp/env/jdbc/ats</property>
        <property name="dialect">org.hibernate.dialect.Oracle9Dialect</property>

    But when I create the Hibernate session, it cannot open the connection:

        09:18:11,322 ERROR JDBCExceptionReporter:72 - Connections could not be acquired from the underlying database!
        org.hibernate.exception.GenericJDBCException: Cannot open connection

    I also tried setting the properties at runtime:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    It does not open a connection either. But when I use the JDBC driver directly, it works:

        Configuration configuration = new Configuration();
        configuration.setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle9Dialect");
        //configuration.setProperty("hibernate.connection.datasource", "java:comp/env/jdbc/ats");
        configuration.setProperty("hibernate.connection.url", "jdbc:oracle:thin:@//localhost:1521/MISDEV");
        configuration.setProperty("hibernate.connection.username", "test");
        configuration.setProperty("hibernate.connection.password", "test");
        configuration.setProperty("hibernate.connection.driver_class", "oracle.jdbc.OracleDriver");
        configuration.setProperty("hibernate.transaction.factory_class", "org.hibernate.transaction.JDBCTransactionFactory");
        configuration.setProperty("hibernate.current_session_context_class", "thread");
        configuration.setProperty("hibernate.connection.provider_class", "org.hibernate.connection.C3P0ConnectionProvider");
        configuration.setProperty("hibernate.show_sql", "true");
        sessionFactory = configuration.configure().buildSessionFactory();

    I have been searching for three days with no success. What might the problem be?
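    One thing worth double-checking, offered as a sketch rather than a confirmed fix: the global="jdbc/ats" attribute above suggests the Resource was declared globally in server.xml under <GlobalNamingResources>. In Tomcat 6, a global resource is not visible to a web application by itself; the application additionally needs a ResourceLink, for example in its META-INF/context.xml (the names below mirror the question's):

        <!-- META-INF/context.xml inside the web application -->
        <Context>
          <!-- Expose the global resource under this app's java:comp/env -->
          <ResourceLink name="jdbc/ats"
                        global="jdbc/ats"
                        type="javax.sql.DataSource"/>
        </Context>

    Alternatively, declaring the <Resource> element (without the global attribute) directly in the application's own context.xml makes the datasource local to that app, and no link is needed. Note also that with a container-managed datasource, the hibernate.connection.provider_class C3P0 setting should not be necessary, since the connection pool is Tomcat's; forcing the C3P0 provider while supplying only a JNDI name may itself cause the "Connections could not be acquired" error.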

    Read the article

< Previous Page | 302 303 304 305 306 307 308 309 310 311 312 313  | Next Page >