Search Results

Search found 3004 results on 121 pages for 'plain'.


  • Chef: Load Attributes from encrypted databag in json role

    - by jcvj
    I want to use the postfix cookbook for Chef. The SASL password is expected to be in an attribute, so usually you would do this:

        "default_attributes": {
            "postfix": { "sasl": { "smtp_sasl_passwd": "somepassword" } }
        }

    The thing is: I don't want the password sitting in the repository in plain text, so I put it in an encrypted data bag. Now I want to access it, which can be done with:

        Chef::EncryptedDataBagItem.load("passwords", "postfix")['password']

    The problem: this only works in a .rb file, but my role is in JSON; all my roles are in JSON, and I don't want to change that just for this purpose. Does anybody have an idea what to do here? Help is very appreciated.
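
    One way to keep all roles in JSON is to move the secret lookup into a small wrapper recipe that runs before postfix; the role then just adds that recipe to its run list instead of setting the password attribute. A minimal sketch (the cookbook name and data bag item layout are assumptions, not part of the postfix cookbook):

        # hypothetical wrapper cookbook, e.g. site-postfix/recipes/default.rb
        secret = Chef::EncryptedDataBagItem.load("passwords", "postfix")
        node.default["postfix"]["sasl"]["smtp_sasl_passwd"] = secret["password"]
        include_recipe "postfix"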

    Read the article

  • How to serve PHP files on an Apache server (localhost) running ColdFusion/MySQL?

    - by frequent
    I'm still learning my way around my localhost server, which is running Apache 2.2, ColdFusion 8 and MySQL Server 5.5 (on Windows XP). I need to work on a site I inherited, which also ran some PHP scripts under the same setup. I have installed PHP 5 on my localhost, but when I open a dummy page containing:

        <?php phpinfo(); ?>

    I only get plain text returned, so I guess I haven't configured Apache correctly to also serve PHP (while defaulting to ColdFusion). Question: where do I need to get started if I want PHP to work on my current setup, too? Is there something I need to add to the httpd.conf file? If possible I don't want to uninstall/reinstall everything, because it took forever to get everything to work (excluding PHP). Thanks for any pointers!
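
    For Apache 2.2 on Windows, the usual fix is to load the PHP module and map the .php extension in httpd.conf. A minimal sketch, assuming PHP 5 was installed to C:\php (adjust the paths to your install):

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"

    After restarting Apache, the phpinfo() test page should render instead of coming back as plain text; ColdFusion keeps handling its own extensions.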

    Read the article

  • Ubuntu 10.04 Apache Configuration for Websites

    - by completenoob
    Looking at a basic Ubuntu 10.04 server setup, Apache points to /var/www for the files it serves up, and the default Apache user is www-data. I'm just trying to set up a plain old WordPress blog. Should I dump the files into /var/www as root or as www-data? Using www-data seems inconvenient since I won't log in as that user, but I guess I can chown the files in /var/www to www-data. Not that I would log in as root either, but what is the recommended user to own the /var/www files? Thanks for the help.
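
    One common convention is to keep ownership with root (or a deploy user), give Apache's group read access, and make only the upload directory writable by the server. A hedged sketch (wp-content/uploads is WordPress's default upload path; adjust if the blog lives in a subdirectory):

        sudo chown -R root:www-data /var/www
        sudo find /var/www -type d -exec chmod 755 {} \;
        sudo find /var/www -type f -exec chmod 644 {} \;
        sudo chown -R www-data /var/www/wp-content/uploads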

    Read the article

  • Windows command FOR /F isn't working?

    - by Mark Ransom
    I'm trying to use the FOR command in Windows XP's command line. I have a file temp.txt with 3 lines: temp1, temp2, temp3. And I'm typing the following command at the prompt:

        for /F %p in (temp.txt) do echo Testing %p

    Nothing comes back. If I remove the /F parameter, the output is "Testing temp.txt". As far as I can determine, I'm using the command exactly as it is documented by Microsoft. I've checked my registry to make sure Command Extensions are on, and even started a new shell with cmd /e:on to be doubly sure. What am I doing wrong?

    Yes, I was doing something wrong. The file temp.txt wasn't created from scratch; I just edited it to put in my test content. Unfortunately, when I created the file the first time, I saved it with a UTF-8 marker at the front. Recreating the file as plain text solved the problem.
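
    FOR /F expects plain ANSI text; a leading byte-order mark (or a Unicode-encoded file) confuses its line parsing. One way to regenerate the file as plain ASCII from the same prompt (a sketch; cmd's echo writes no BOM):

        (echo temp1& echo temp2& echo temp3) > temp.txt
        for /F %p in (temp.txt) do echo Testing %p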

    Read the article

  • Xubuntu terminal overrides shell and Vim color schemes; how to deactivate?

    - by erikb85
    I'm running an up-to-date Xubuntu and have a real problem with these terminals. I don't want them to be all that design-ish; I just want the plain old terminal with a black background and green text, and in Vim I want to use my own color scheme. But xfce4-terminal doesn't seem to let me do that: it always uses its own color schemes, and they just don't work for all cases (it only distinguishes about 6 different types of text elements; for coding you need more). How can I disable the coloring in the terminal, or just load a simple scheme without all these features?
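
    xfce4-terminal reads its palette from a plain config file, so one option is to pin the basic colors there and leave the rest to the shell and Vim. A sketch, assuming the default config location of ~/.config/xfce4/terminal/terminalrc:

        [Configuration]
        ColorForeground=#00ff00
        ColorBackground=#000000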

    Read the article

  • mod_rewrite [L] flag not working as expected?

    - by bobobobo
    I thought the [L] flag indicated that "this rule should be the last rule processed for this HTTP request." However, when I have 2 rules like:

        RewriteRule ^test$ php/test.php [L]
        RewriteRule (.*) error.php

    what always happens is that requests to http://localhost/test go to error.php, not to test.php as I expected, since I put the [L] there. If you comment out the second rule, then requests to http://localhost/test go to test.php as expected. What I'm really trying to do is catch 404 errors with mod_rewrite. It's possible that what I'm trying to do is just plain wrong, but I still want to know why the catch-all rule is active when I did put an [L] after the ^test rule. I've seen a large listing where the server admin lists a bunch of paths that begin with the recognized directories, but I wanted to avoid doing this by simply using a nice catch-all rule.
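
    What trips this up: in per-directory (.htaccess) and virtual-host contexts, [L] only ends the current pass. The rewritten URL is re-injected as a new (sub)request, and on that second pass /php/test.php matches the catch-all. The usual workaround is to guard the catch-all so it skips URLs that already resolve to a real file or directory. A sketch:

        RewriteRule ^test$ php/test.php [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule (.*) error.php [L]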

    Read the article

  • Forward external traffic to 127.0.0.1

    - by user2939415
    I have an HTTP server running on 127.0.0.1:8000. How can I use iptables or something to route external traffic to it? I want to be able to access my.ip.addr:8000 from my browser. The following does not help:

        iptables -A PREROUTING -i eth0 -p tcp --dport 8000 -j REDIRECT --to-ports 8000

    EDIT: To test whether or not this works, I am using the following node.js script:

        // Load the http module to create an http server.
        var http = require('http');

        // Configure our HTTP server to respond with Hello World to all requests.
        var server = http.createServer(function (request, response) {
            response.writeHead(200, {"Content-Type": "text/plain"});
            response.end("Hello World\n");
        });

        // Listen on port 8000; the IP defaults to 127.0.0.1.
        server.listen(8000, "127.0.0.1");

        // Put a friendly message on the terminal.
        console.log("Server running at http://127.0.0.1:8000/");
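
    A server bound to 127.0.0.1 only accepts connections that arrive on the loopback interface, so redirecting external packets at the same port doesn't reach it out of the box. The simplest change, if acceptable, is to bind to all interfaces and drop the iptables rule entirely:

        // listen on every interface instead of loopback only
        server.listen(8000, "0.0.0.0");

    If the server must stay on loopback, note that the REDIRECT rule belongs in the nat table (iptables -t nat -A PREROUTING ...), and on newer kernels routing external traffic to 127.0.0.1 additionally requires the per-interface route_localnet sysctl.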

    Read the article

  • Session Cookies and IE 8

    - by Matt Luongo
    I recently built a simple web app deployed on Tomcat. The app uses pretty standard session-based security, where a user who has logged in is given a session. Sessions work fine in Firefox and Chrome, but require the use of jsessionid in the URL for IE (tested 7 & 8) set to medium privacy. In IE 8 I tried to override cookie handling, setting "Allow all 3rd party cookies" and "Allow all session cookies" - no dice. However, when I run Tomcat on my local machine, IE accepts the cookie and sessions work just fine. And now, for the HTTP headers.

    From Chrome, a logged-in user gets a session:

        GET http://devl:8080/testing/ HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:14:40 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Host: devl:8080
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1036 Safari/532.5
        Referer: http://devl:8080/testing/
        Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Cookie: JSESSIONID=9280023BCE2046F32B13C89130CBC397
        ...

    From IE 8, with standard medium-level security and privacy:

        GET http://devl:8080/testing/ HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Host: devl:8080
        Connection: Keep-Alive

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        P3P: CP="NON CURa ADMa DEVa TAIa OUR BUS IND UNI COM NAV INT STA"
        Set-Cookie: JSESSIONID=192999F922D6E9C868314452726764BA; Path=/testing
        Content-Type: text/html;charset=UTF-8
        Content-Language: en-US
        Content-Length: 2450
        Date: Fri, 26 Mar 2010 14:32:34 GMT

        GET http://devl:8080/testing/logout HTTP/1.1
        Accept: application/x-ms-application, image/jpeg, application/xaml+xml, image/gif, image/pjpeg, application/x-ms-xbap, */*
        Referer: http://devl:8080/testing/;jsessionid=6371A83EFE39A46997544F9146AA5CEA
        Accept-Language: en-US
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDC; Tablet PC 2.0)
        UA-CPU: AMD64
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: devl:8080
        ...

    I thought it might be P3P, but on adding a compact policy, nothing changes. This is standard Tomcat session handling, so I'm really surprised I haven't been able to find other people with the same problem so far. Anyone have any ideas?

    Read the article

  • Ruby on Rails Mongrel server failing to serve on OS X 10.6

    - by Mark V
    Hi there. I'm fairly new to Rails and the Mac, and doing my first deploy. I'm trying to set up my Rails app on a brand new Apple mini-server running OS X 10.6 (Snow Leopard). The app currently runs fine on my new iMac i7 (same OS). I start Mongrel with this command:

        mongrel_rails start -e production -p 3000 -d -a 127.0.0.1 --debug

    and it starts, giving this output in log/mongrel.log:

        ** Daemonized, any open files are closed. Look at log/mongrel.pid and log/mongrel.log for info.
        ** Starting Mongrel listening at 127.0.0.1:3000
        ** Installing debugging prefixed filters. Look in log/mongrel_debug for the files.
        ** Starting Rails with production environment...
        /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        /Users/danadmin/ServiceApp/ServiceApp/app/helpers/input_grid_manager.rb:9: warning: already initialized constant ID_PREFIX
        /Users/danadmin/ServiceApp/ServiceApp/app/helpers/input_grid_manager.rb:10: warning: already initialized constant ADD_ID
        ** Rails loaded.
        ** Loading any Rails specific GemPlugins
        ** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart).
        ** Rails signals registered. HUP => reload (without restart). It might not work well.
        ** Mongrel 1.1.5 available at 127.0.0.1:3000
        ** Writing PID file to log/mongrel.pid

    The output is the same on my dev iMac (including the warnings). The difference is that accessing http://127.0.0.1:3000 on my iMac serves up the app's login page, whereas on the Mac mini-server the same access results in this error 500 text from Mongrel: "We're sorry, but something went wrong." It's as if Rails is not working. I'm pretty good at figuring things out if I have some log file messages to direct me, but mongrel.log has no error message (the output remains the same as above), and log/production.log is empty (which makes me think Rails has not started?). My gems are all the same versions between machines and so is the app code; and there are no clues I can see in any of the mongrel_debug logs, except that rails.log on the Mac mini-server and the iMac are different.

    After a start and a single access, here is rails.log from the Mac mini-server:

        D, [2010-04-15T13:45:34.870406 #6914] DEBUG -- : TRACING ON Thu Apr 15 13:45:34 +1200 2010
        Thu Apr 15 13:46:08 +1200 2010 REQUEST / --- !map:Mongrel::HttpParams
        SERVER_NAME: 127.0.0.1
        HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        HTTP_CACHE_CONTROL: max-age=0
        HTTP_HOST: 127.0.0.1:3000
        HTTP_USER_AGENT: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_0; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2
        REQUEST_PATH: /
        SERVER_PROTOCOL: HTTP/1.1
        HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8
        REMOTE_ADDR: 127.0.0.1
        PATH_INFO: /
        SERVER_SOFTWARE: Mongrel 1.1.5
        SCRIPT_NAME: /
        HTTP_VERSION: HTTP/1.1
        REQUEST_URI: /
        SERVER_PORT: "3000"
        HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        REQUEST_METHOD: GET
        GATEWAY_INTERFACE: CGI/1.2
        HTTP_ACCEPT_ENCODING: gzip,deflate,sdch
        HTTP_CONNECTION: keep-alive

    While on my iMac it seems the same, except for the addition of HTTP_COOKIE and HTTP_IF_NONE_MATCH. Here is rails.log from my iMac:

        # Logfile created on Thu Apr 15 13:41:42 +1200 2010 by logger.rb/22285
        D, [2010-04-15T13:41:42.934088 #2070] DEBUG -- : TRACING ON Thu Apr 15 13:41:42 +1200 2010
        Thu Apr 15 13:42:05 +1200 2010 REQUEST / --- !map:Mongrel::HttpParams
        SERVER_NAME: 127.0.0.1
        HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
        HTTP_HOST: 127.0.0.1:3000
        HTTP_USER_AGENT: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/5.0.342.9 Safari/533.2
        REQUEST_PATH: /
        SERVER_PROTOCOL: HTTP/1.1
        HTTP_IF_NONE_MATCH: "\"216cc63ce3c1f286ef8dd4f18f354f6e\""
        HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8
        REMOTE_ADDR: 127.0.0.1
        PATH_INFO: /
        SERVER_SOFTWARE: Mongrel 1.1.5
        SCRIPT_NAME: /
        HTTP_COOKIE: _ServiceApp_session=BAh7DDonY3VzdG9tZXJfbGlzdF9maWx0ZXJfam9iX3N0YXR1c19pZGn6Og9zZXNzaW9uX2lkIiU0ZTk1ZWZjMmViMGU3NjE2YzA0NDc2YTkxYzJlNDZiOToaY3VycmVudF9jdXN0b21lcl9uYW1lIilUSEUgQ1VTVE9NRVIgTkFNRSBORUVEUyBUTyBCRSBMT0FERUQ6EF9jc3JmX3Rva2VuIjFuT1JMUWk0NlZrWlM3c2lUN3BaWCs5NkhRajhxYnFwRnhzVHVTWXEvUWY0PToZam9iX2xpc3RfZmlsdGVyX3RleHQiADogam9iX2xpc3RfZmlsdGVyX2VtcGxveWVlX2lkafo6HmN1c3RvbWVyX2xpc3RfZmlsdGVyX3RleHQiAA%3D%3D--d01bc5d0b457ad524d16cb3402b5dfed9afce83d
        HTTP_VERSION: HTTP/1.1
        REQUEST_URI: /
        SERVER_PORT: "3000"
        HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
        REQUEST_METHOD: GET
        GATEWAY_INTERFACE: CGI/1.2
        HTTP_ACCEPT_ENCODING: gzip,deflate,sdch
        HTTP_CONNECTION: keep-alive

    Any direction or ideas would be greatly appreciated. Thanks.
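
    One low-risk debugging step is to run Mongrel in the foreground (the same command minus the -d flag), so boot failures print to the console instead of being swallowed by the daemonized process:

        mongrel_rails start -e production -p 3000 -a 127.0.0.1 --debug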

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to locations in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach to accomplishing this goal. I have come up with several designs.

    Manual location tracking. Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read.
    Advantages: very simple; no class bloat; no need to rewrite the TextReader methods.
    Disadvantages: couples location-tracking logic to the tokenization process; easy to forget to track something (though unit testing helps with this); bloats existing code.

    Plain TextReader wrapper. Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example:

        public class LocatedTextReaderWrapper : TextReader
        {
            private TextReader source;

            public Location Location { get; set; }

            public LocatedTextReaderWrapper(TextReader source) : this(source, new Location()) { }

            public LocatedTextReaderWrapper(TextReader source, Location location)
            {
                this.Location = location;
                this.source = source;
            }

            public override int Read(char[] buffer, int index, int count)
            {
                int ret = this.source.Read(buffer, index, count);
                if (ret > 0)
                {
                    // advance by the characters actually read, not the count requested
                    this.Location.AdvanceString(new string(buffer, index, ret));
                }
                return ret;
            }

            // etc.
        }

    Advantages: tokenization doesn't know about Location tracking.
    Disadvantages: the user needs to create and dispose a LocatedTextReaderWrapper instance in addition to their TextReader instance; doesn't allow different types of tracking, or different location trackers, to be added without layers of wrappers.

    Event-based TextReader wrapper. Like LocatedTextReaderWrapper, but decoupled from the Location object, raising an event whenever data is read.
    Advantages: can be reused for other types of tracking; tokenization doesn't know about Location tracking or other tracking; can have multiple independent Location objects (or other methods of tracking) tracking at once.
    Disadvantages: requires boilerplate code to enable location tracking; the user needs to create and dispose the wrapper instance in addition to their TextReader instance.

    Aspect-oriented approach. Use AOP to perform like the event-based wrapper approach.
    Advantages: can be reused for other types of tracking; tokenization doesn't know about Location tracking or other tracking; no need to rewrite the TextReader methods.
    Disadvantages: requires external dependencies, which I want to avoid.

    I am looking for the best approach in my situation. I would like to: not bloat the tokenizer methods with location tracking; not require heavy initialization in user code; not have any/much boilerplate/duplicated code; and (perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments are welcome. Thanks! (For those who want a specific question: what is the best way to wrap the functionality of a TextReader?) I have implemented the "plain TextReader wrapper" and "event-based TextReader wrapper" approaches and am displeased with both, for reasons mentioned in their disadvantages.

    Read the article

  • Font goes back to default size

    - by Bladimir Ruiz
    Every time I change the font it goes back to the default size, which is 12, even if I changed the size beforehand with the "Tamano" menu. My guess is that the problem is the way I change the size with deriveFont(), but I don't know any other way to change it.

        public static class cambiar extends JFrame {
            public cambiar() {
                final Font aryal = new Font("Comic Sans MS", Font.PLAIN, 12);
                JFrame ventana = new JFrame("Cambios en el Texto!");
                JPanel adentro = new JPanel();
                final JLabel texto = new JLabel("Texto a Cambiar!");
                texto.setFont(aryal);
                JMenuBar menu = new JMenuBar();
                JMenu fuentes = new JMenu("Fuentes");
                /* Elementos de Fuentes */
                JMenuItem arial = new JMenuItem("Arial");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        Font arrrial = new Font("Arial", Font.PLAIN, 12);
                        float tam = (float) texto.getFont().getSize();
                        String hola = String.valueOf(tam);
                        texto.setFont(arrrial);
                        texto.setFont(texto.getFont().deriveFont(tam));
                    }
                });
                fuentes.add(arial);
                /* FIN Fuentes */
                JMenu tamano = new JMenu("Tamano");
                /* Elementos de Tamano */
                JMenuItem font13 = new JMenuItem("13");
                font13.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(texto.getFont().deriveFont(23.0f));
                    }
                });
                JMenuItem font14 = new JMenuItem("14");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font15 = new JMenuItem("15");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font16 = new JMenuItem("16");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font17 = new JMenuItem("17");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font18 = new JMenuItem("18");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font19 = new JMenuItem("19");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                JMenuItem font20 = new JMenuItem("20");
                arial.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        texto.setFont(aryal);
                    }
                });
                tamano.add(font13);
                /* FIN tamano */
                JMenu tipo = new JMenu("Tipo");
                /* Elementos de tipo */
                /* FIN tipo */
                /* Elementos del JMENU */
                menu.add(fuentes);
                menu.add(tamano);
                menu.add(tipo);
                /* FIN JMENU */
                /* Elementos del JPanel */
                adentro.add(menu);
                adentro.add(texto);
                /* FIN JPanel */
                /* Elementos del JFRAME */
                ventana.add(adentro);
                ventana.setVisible(true);
                ventana.setSize(250, 250);
                /* FIN JFRAME */
            }
        }

    Thanks in advance!
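
    The resets come from installing a brand-new Font(..., Font.PLAIN, 12) on each menu action, which discards the label's current size (note also that the size items font14 through font20 are all wired to the arial menu item in the code above). A sketch of a family-switching listener body that keeps whatever size is currently set:

        // change the family only: derive the new font at the label's current size
        float size = texto.getFont().getSize2D();
        texto.setFont(new Font("Arial", Font.PLAIN, 12).deriveFont(size));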

    Read the article

  • How to find the problem with PHP XSLTProcessor when transformToXML returns false and libxml_get_errors() returns nothing

    - by John
    I'm working on the code below to allow HTTP user agents that cannot perform XSL transformations to view the resources on my server. I'm mystified because the result of transformToXML is false, but the result of libxml_get_errors() is an empty array. As you can see, the code outputs the LibXSLT version ID, and I'm getting the problem on WinVista with version 1.1.24. Is libxml_get_errors() not the right function to get the errors from the XSLTProcessor object? If you're interested in the XML documents, you can get them from http://bobberinteractive.com/index.xhtml and .../stylesheets/layout.xsl

        <?php
        // Redirect browsers that can handle the source files.
        if (strpos($_SERVER['HTTP_ACCEPT'], 'application/xhtml+xml')) {
            header("HTTP/1.1 301 Moved Permanently");
            header("Location: http://" . $_SERVER['SERVER_NAME'] . "/index.xhtml");
            header("Content-Type: text/text");
            echo "\nYour browser is capable of processing the <a href='/index.xhtml'>site contents</a> on its own.";
            die();
        }

        // Start by checking the template.
        $baseDir = dirname(__FILE__);
        $xslDoc = new DOMDocument();
        if (!$xslDoc->load($baseDir . '/stylesheets/layout.xsl')) {
            header("HTTP/1.1 500 Server Error");
            header("Content-Type: text/plain");
            echo "\n Can't load " . $baseDir . '/stylesheets/layout.xsl';
            die();
        }

        // Resolve the requested resource (browsers that need transformation
        // request the resource without the suffix).
        $uri = $_SERVER['REQUEST_URI'];
        $len = strlen($uri);
        if (1 >= $len || '/' == substr($uri, $len - 1)) {
            $fileName = $baseDir . "/index.xhtml"; // use 'default' document if pathname ends in '/'
        } else {
            $fileName = $baseDir . $uri . ".xhtml"; // reconstructed: this line was garbled in the original post
        }
        $xmlDoc = new DOMDocument();
        if (!$xmlDoc->load($fileName)) {
            header("HTTP/1.1 500 Server Error");
            echo "\n Can't load " . $fileName;
            die();
        }

        // Now start the XSL template processing.
        $proc = new XSLTProcessor();
        $proc->importStylesheet($xslDoc);
        $doc = $proc->transformToXML($xmlDoc);
        if (false === $doc) {
            header("HTTP/1.1 500 Server Error");
            header("Content-Type: text/plain");
            echo "\n";
            // HERE is where it gets strange: $doc is false and libxml_get_errors() returns 0 entries.
            display_xml_errors(libxml_get_errors());
            die();
        }
        header("Content-Type: text/html");
        echo "\n";
        echo $doc;

        function display_xml_errors($errors)
        {
            echo count($errors) . " Error(s) from LibXSLT " . LIBXSLT_DOTTED_VERSION;
            for ($i = 0; $i < count($errors); $i++) {
                $error = $errors[$i];
                $return = '';
                switch ($error->level) {
                    case LIBXML_ERR_WARNING:
                        $return .= "Warning $error->code: ";
                        break;
                    case LIBXML_ERR_ERROR:
                        $return .= "Error $error->code: ";
                        break;
                    case LIBXML_ERR_FATAL:
                        $return .= "Fatal Error $error->code: ";
                        break;
                }
                $return .= trim($error->message) . "\n Line: $error->line" . "\n Column: $error->column";
                if ($error->file) {
                    $return .= "\n File: $error->file";
                }
                echo "$return\n\n--------------------------------------------\n\n";
            }
        }
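
    One detail worth checking: libxml only buffers errors for later retrieval when internal error handling is switched on; without it, libxml_get_errors() stays empty and the messages go to PHP's normal error output. A sketch of the change:

        // enable libxml's error buffer before loading documents and transforming
        libxml_use_internal_errors(true);
        $doc = $proc->transformToXML($xmlDoc);
        if (false === $doc) {
            display_xml_errors(libxml_get_errors());
        }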

    Read the article

  • Did I find a bug in PHP's `crypt()`?

    - by Nathan Long
    I think I may have found a bug in PHP's crypt() function under Windows. However, I recognize that it's probably my fault. PHP is used by millions and worked on by thousands; my code is used by tens and worked on by me. (This argument is best explained on Coding Horror.) So I'm asking for help: show me my fault. I've been trying to find it for a few days now, with no luck.

    The setup: I'm using a Windows server installation with Apache 2.2.14 (Win32) and PHP 5.3.2. My development box runs Windows XP Professional; the 'production' server (this is an intranet setup) runs Windows Storage Server 2003. The problem happens on both. I don't see anything in php.ini related to crypt(), but will happily answer questions about my config.

    The problem: several scripts in my PHP app occasionally hang: the page sits there on 'waiting for localhost' and never finishes. Each of these scripts uses crypt to hash a user's password before storing it in the database, or, in the case of the login page, to hash the entered password before comparing it to the version stored in the database. Since the login page is the simplest, I focused on it for testing. I repeatedly logged in, and found that it would hang maybe 4 out of 10 times. As an experiment, I changed the login page to use the plain text password and changed my password in the database to its plain text version. The page stopped hanging. I saw that PHP's latest version lists this bugfix: Fixed bug #51059 (crypt crashes when invalid salt are [sic] given). So I created a very simple test script, as follows, using the same salt given in an official example:

        $foo = crypt('rasmuslerdorf', 'r1');
        echo $foo;

    This page, too, will hang if I reload it like crazy. I only see it hanging in Chrome, but regardless of browser, the effect on Apache is the same.

    Effect on Apache: when these pages hang, Apache's server-status page (which I explained here, regarding a different problem) increments the number of requests being processed and decrements the number of idle workers. The requests being processed almost all have a status of 'Sending Reply', though sometimes for a moment they will show either 'Reading request' or 'keepalive (read)'. Eventually, Apache may crash. When it does, the Windows crash report looks like this:

        szAppName: httpd.exe
        szAppVer: 2.2.14.0
        szModName: php5ts.dll
        szModVer: 5.3.1.0
        // OK, this report was before I upgraded to PHP 5.3.2, but that didn't fix it
        offset: 00a2615

    Is it my fault? I'm tempted to file a bug report to PHP on this. The argument against it is, as stated above, that bugs are nearly always my fault. However, my argument in favor of 'it's PHP's fault' is: I'm using Windows, whereas most servers use Linux (I don't get to choose this), so the chances are greater that I've found an edge case; there was recently a bug with crypt(), so maybe it still has issues; and I have made the simplest test case I can and still have the problem. Can anyone duplicate this? Can you suggest where I've gone wrong? Should I file the bug after all? Thanks in advance for any help you may give.
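
    Whether or not it is the same crash, since the recent fix concerned invalid salts, a cheap experiment is to hand crypt() a complete salt in an explicit format rather than the two-character DES one. A sketch (the salt string is just an example):

        // MD5-format salt: '$1$', up to eight salt characters, then '$'
        $hash = crypt('rasmuslerdorf', '$1$rasmusle$');
        echo $hash;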

    Read the article

  • Slicing a time range into parts

    - by beporter
    First question. Be gentle. I'm working on software that tracks technicians' time spent working on tasks. The software needs to be enhanced to recognize different billable-rate multipliers based on the day of the week and the time of day. (For example, "Time and a half after 5 PM on weekdays.") The tech using the software is only required to log the date, his start time and his stop time (in hours and minutes). The software is expected to break the time entry into parts at the boundaries where the rate multipliers change. A single time entry is not permitted to span multiple days. Here is a partial sample of the rate table:

        [rateTable] => Array
        (
            [Monday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            [Tuesday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            ...
        )

    The first-level array keys are the days of the week, obviously. The second-level array keys represent the time of day when the new multiplier kicks in and runs until the next sequential entry in the array; the array values are the multipliers for those ranges. (This format is entirely negotiable, but my goal is to make it as easily human-readable as possible.) In plain English, this represents a time-and-a-half rate from midnight to 8 AM, regular rate from 8 AM to 5 PM, and time-and-a-half again from 5 PM to 11:59 PM. The breaks may occur at arbitrary times, to the second, and there can be an arbitrary number of them per day.

    As an example: a time entry logged on Monday from 15:00:00 (3 PM) to 21:00:00 (9 PM) would consist of 2 hours billed at 1x and 4 hours billed at 1.5x. It is also possible for a single time entry to span multiple breaks. Using the example rateTable above, a time entry from 6 AM to 9 PM would have 3 sub-ranges: 6-8 AM @ 1.5x, 8 AM-5 PM @ 1x, and 5-9 PM @ 1.5x. By contrast, a time entry may run only from 08:15:00 to 08:30:00 and be entirely encompassed by the range of a single multiplier.

    I could really use some help coding up some PHP (or at least devising an algorithm) that can take a day of the week, a start time and a stop time and parse them into the required subparts. It would be ideal for the output to be an array consisting of multiple (start, stop, multiplier) triplets. For the above example, the output would be:

        [output] => Array
        (
            [0] => Array
            (
                [start] => 15:00:00
                [stop] => 17:00:00
                [multiplier] => 1
            )
            [1] => Array
            (
                [start] => 17:00:00
                [stop] => 21:00:00
                [multiplier] => 1.5
            )
        )

    I just plain can't wrap my head around the logic of splitting a single (start, stop) into (potentially) multiple subparts.
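
    A sketch of one way to do the split, assuming the zero-padded HH:MM:SS keys shown above (so plain string comparison orders times correctly) and treating each key as the start of a band that runs to the next key:

        <?php
        // $dayRates: one day's sub-array of the rate table, e.g. $rateTable['Monday']
        // returns an array of ('start', 'stop', 'multiplier') triplets
        function sliceTimeEntry(array $dayRates, $start, $stop)
        {
            ksort($dayRates);                  // make sure the bands are in order
            $breaks = array_keys($dayRates);
            $out = array();
            for ($i = 0, $n = count($breaks); $i < $n; $i++) {
                $bandStart = $breaks[$i];
                $bandStop  = ($i + 1 < $n) ? $breaks[$i + 1] : '24:00:00';
                // clip the entry to this band; string max/min works for HH:MM:SS
                $s = max($start, $bandStart);
                $e = min($stop, $bandStop);
                if ($s < $e) {
                    $out[] = array('start' => $s, 'stop' => $e,
                                   'multiplier' => $dayRates[$bandStart]);
                }
            }
            return $out;
        }

        print_r(sliceTimeEntry($rateTable['Monday'], '15:00:00', '21:00:00'));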

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks. I've been developing on LAMP (Linux + Apache + MySQL + PHP) for as long as I can remember, but one question has been bugging me for years. I hope you can help me find an answer and point me in the right direction. Here is my challenge: say we are creating a community website where we allow our users to register. The MySQL table where we store all users would look like this:

        CREATE TABLE `users` (
          `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
          `name` varchar(20) NOT NULL,
          `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
          `email` varchar(64) NOT NULL,
          `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
          `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
          PRIMARY KEY (`uid`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    From this snippet you can see that we have a unique, automatically incrementing 'uid' field for every new user. As every good and loyal community website should, we need to give users the ability to completely delete their profile if they want to cancel their participation in our community. Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2) and Chris (uid = 3). Now Bob wants to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table, his missing uid creates a gap that will never be filled again. In my opinion it's a huge waste of uids. I see 3 possible solutions:

    1) Increase the capacity of the 'uid' column, for example from INT to BIGINT, and ignore the fact that some uids will be wasted. (Note that the int(2) above only sets a display width; the column is a full INT either way.)

    2) Introduce a new field 'is_deleted', which will be used to mark deleted profiles (keeping them in the table instead of deleting them) so their uids can be re-used for newly registered users. The table would then look like this:

        CREATE TABLE `users` (
          `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
          `name` varchar(20) NOT NULL,
          `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
          `email` varchar(64) NOT NULL,
          `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations',
          `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
          `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
          PRIMARY KEY (`uid`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case, when Bob (uid = 2) removes his profile, we would replace his record with Chris's record (uid = 3), so that Chris's uid becomes 2, and mark Chris's old record (is_deleted = '1') as vacant for new users. This keeps uids in chronological order of registration time, so older users have lower uids.

    Please advise me which way is the right one to handle gaps in auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some code:

        import urllib.request
        from urllib import error

        headers = {}
        headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
        headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
        headers['Accept-Language'] = 'en-gb,en;q=0.5'
        #headers['Accept-Encoding'] = 'gzip, deflate'

        request = urllib.request.Request(sURL, headers=headers)
        try:
            response = urllib.request.urlopen(request)
        except error.HTTPError as e:
            print('The server couldn\'t fulfill the request.')
            print('Error code: {0}'.format(e.code))
        except error.URLError as e:
            print('We failed to reach a server.')
            print('Reason: {0}'.format(e.reason))
        else:
            f = open('output/{0}.html'.format(sFileName), 'w')
            f.write(response.read().decode('utf-8'))

    A url: http://groupon.cl/descuentos/santiago-centro

    The situation. Here's what I did:

    1. enable JavaScript in the browser
    2. open the url above and keep an eye on the console
    3. disable JavaScript
    4. repeat step 2
    5. use urllib2 to grab the webpage and save it to a file
    6. enable JavaScript
    7. open the file with the browser and observe the console
    8. repeat step 7 with JavaScript off

    Results: in step 2 I saw that a whole lot of the page content was loaded dynamically using Ajax, so the HTML that arrived was a sort of skeleton and Ajax was used to fill in the gaps. This is fine and not at all surprising. Since the page should be SEO-friendly, it should work fine without JS. In step 4 nothing happens in the console and the skeleton page loads pre-populated, rendering the Ajax unnecessary. This is also completely not confusing. In step 7 the Ajax calls are made but fail. This is also OK, since the urls they use are not local; the calls are thus broken. The page looks like the skeleton. This is also great and expected. In step 8 no Ajax calls are made and the skeleton is just a skeleton. I would have thought this should behave very much like step 4.

    Question: what I want to do is use urllib2 to grab the HTML from step 4, but I can't figure out how. What am I missing, and how could I pull this off? To paraphrase: if I were writing a spider, I would want to be able to grab plain ol' HTML (as in what resulted in step 4). I don't want to execute Ajax stuff or any JavaScript at all, and I don't want anything populated dynamically. I just want HTML. The SEO-friendly site wants me to get what I want, because that's what SEO is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off JS, navigate to the page and copy the HTML. I want to automate this.

    Stuff I've tried: I used Wireshark to look at packet headers, and the GETs sent off from my PC in steps 2 and 4 have the same headers. Reading about SEO makes me think this is pretty normal; otherwise, techniques such as hijax wouldn't be used.

    Here are the headers my browser sends:

        Host: groupon.cl
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Here are the headers my script sends:

        Accept-Encoding: identity
        Host: groupon.cl
        Accept-Language: en-gb,en;q=0.5
        Connection: close
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

    The differences: my script sends Connection: close instead of keep-alive (I can't see how this would cause a problem), and my script sends Accept-Encoding: identity (this might be the cause of the problem, though I can't really see why the host would use this field to determine the user agent). If I change the encoding to match the browser request headers, then I have trouble decoding the response. I'm working on this now... watch this space; I'll update the question as new info comes up.
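
    For the gzip branch, a sketch of matching the browser's Accept-Encoding and then decompressing by hand (Python 3; assumes UTF-8 content):

        import gzip
        import urllib.request

        headers['Accept-Encoding'] = 'gzip, deflate'
        req = urllib.request.Request(sURL, headers=headers)
        resp = urllib.request.urlopen(req)
        raw = resp.read()
        if resp.headers.get('Content-Encoding') == 'gzip':
            raw = gzip.decompress(raw)   # undo the transfer encoding
        html = raw.decode('utf-8')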

    Read the article

  • Has JavaScript developed beyond what it was originally designed to do?

    - by Elliot Bonneville
    I've been talking with a friend about the purpose of JavaScript, when and how it should be used, etc. He quoted this:

        JavaScript was designed to add interactivity to HTML pages [...] JavaScript gives HTML designers a programming tool. HTML authors are normally not programmers, but JavaScript is a scripting language with a very simple syntax! Almost anyone can put small "snippets" of code into their HTML pages. JavaScript can react to events: a script can be set to execute when something happens, like when a page has finished loading or when a user clicks on an HTML element. JavaScript can read and write HTML elements: a script can read and change the content of an HTML element. JavaScript can be used to validate data: a script can validate form data before it is submitted to a server, saving the server from extra processing. JavaScript can be used to detect the visitor's browser and, depending on the browser, load another page specifically designed for that browser. JavaScript can be used to create cookies: a script can store and retrieve information on the visitor's computer.

    However, it seems like JavaScript is getting used to do a lot more than that these days. My friend also advocates against using JavaScript's OOP functionality, claiming that "you shouldn't be processing data, merely validating." Is JavaScript really limited to validating data and making flashy graphics on a web page? He goes on to claim "you shouldn't be attempting to access databases through javascript" and also says "in general you don't want to be doing your heavy lifting in javascript". I can't say I agree with his opinion, but I'd like to get some more input on this. So, my question: has JavaScript evolved from the definition above into something more powerful, has the way we use it changed, or am I just plain wrong? While I realize this is a subjective question, I can't find any more information on it, so a few links would be good, if nothing else. I'm not looking for a debate, just an answer.

    Read the article

  • SVN: Working with branches using the same working copy

    - by uXuf
    We've just moved to SVN from CVS. We have a small team, everyone checks code in on the trunk, and we have never used branches for development. We each have a directory on a remote dev server with the codebase checked out, and each developer works in their own sandbox with an associated URL to pull up the app in a browser (something like the setup here: Trade-offs of local vs remote development workflows for a web development team). I've decided that for my current project I'll use a branch, because it will span multiple releases. I've already cut a branch, but I am using the same directory as the one originally checked out (i.e. for the trunk). Since it's the same directory (or working copy) for both the branch and the trunk, if, for example, a bug pops up in the app, I switch to the trunk, commit the change there, and then switch back to my branch for my project development. My questions are: Is this a sane way to work with branches? Are there any pitfalls I need to be aware of? What would be the optimal way to work with branches if separate working copies are out of the question? I haven't had issues yet, as I have just started working this way, but all the tutorials/books/blog posts I have seen about branching with SVN imply working with different working copies (or perhaps I haven't come across an explanation of mixed working copies in plain English). I just don't want to be sorry three months down the road when it's time to integrate the branch back into the trunk.
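
    For reference, the switch-back-and-forth workflow described above looks something like this (the branch path is hypothetical; ^/ is SVN 1.6+ shorthand for the repository root):

        svn switch ^/branches/my-feature    # work on the branch
        # ...a bug shows up...
        svn switch ^/trunk                  # fix and commit on trunk
        svn switch ^/branches/my-feature    # back to the branch

    One pitfall worth knowing: svn switch carries uncommitted local changes along and can produce conflicts, so it pays to commit before each switch.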

    Read the article

  • Twitter gem - undefined method `stringify_keys'

    - by Piet
    Have you been getting the following errors when running the Twitter gem lately?

        /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `send': undefined method `stringify_keys' for # (NoMethodError)
        from /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `method_missing'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:131:in `deep_update'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:50:in `initialize'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `new'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `fetch'
        from test.rb:26

    It's because Twitter has been sending back plain-text errors that are treated as a string instead of JSON and can't be properly 'Mashed' by the Twitter gem. Also check http://github.com/jnunemaker/twitter/issues#issue/6. Without diving into the bowels of the Twitter gem or HTTParty, you could 'begin...rescue' this error and try again in 5 minutes. I fixed it by overriding the offending code to return nil and checking for a nil response, as follows:

        module Twitter
          class Search
            def fetch(force=false)
              if @fetch.nil? || force
                query = @query.dup
                query[:q] = query[:q].join(' ')
                query[:format] = 'json' # this line is the hack and the whole reason we're monkey-patching at all
                response = self.class.get('http://search.twitter.com/search', :query => query, :format => :json)
                # our patch: response should be a Hash; if it isn't, return nil
                return nil if response.class != Hash
                @fetch = Mash.new(response)
              end
              @fetch
            end
          end
        end

    (Adapted from http://github.com/jnunemaker/twitter/issues#issue/9.) If you have a better solution: speak up!

    Read the article

  • Letter to Ballmer: Making Better Consumer Devices

    - by andrewbrust
    Last year, I wrote Steve Ballmer an email, and he was kind enough to write me back. The email contained a scan of a column I wrote praising Microsoft's BI strategy. His reply contained three simple words: "Super nice thanks." Well, now I'd like to write to Steve again, in an open letter format, and this time the love may be a bit tougher. But I'm still super earnest. The past two days have been eventful ones for Microsoft: the company announced the departure of company veterans Robbie Bach and J Allard, and the market announced Apple is now besting Microsoft in market capitalization. Plus, announcements were made that make it plain that Ballmer will, in effect, be running Microsoft's Entertainment & Devices division himself. With that in mind, I'd like to offer my list of a dozen things I think Microsoft's CEO should do to improve that division's offerings and, hopefully, its bottom line. So here goes:

    1. On Windows Phone 7, Stay the Course
    The press is teeming with headlines and reader comments proclaiming the death-before-arrival of Windows Phone 7. That's plain silly. You've got the makings of a great and unique smartphone platform, and you're the only company (even considering RIM) that can offer full-fidelity Exchange integration, not to mention implementing Office on the device. Let the existing team finish this puppy and ship it. And then have them pump out a few updates, over the air, quickly. Show them that Google Android's not the only product that can do good, rapid dot releases. And another thing: make sure your OEMs' devices have flawless touch screens. If they don't, then you shouldn't certify them for delivery to customers. Period. Oh, and kill the Kin, quietly. It was DOA, and you know it.

    2. Move Media Center to the Xbox Platform
    Media Center is, at its core, a good product. But delivering a media distribution and DVR platform on a sophisticated PC operating system like Windows 7 just creates too many moving parts. Xbox already functions as the best Media Center extender device; it should actually be the hub as well. Media Center is mostly based on .NET code, and XNA is a .NET environment for Xbox: find a way to bridge that small gap and make Media Center a joy to work with instead of a frustration. Beating Apple TV out of this sub-market is the lowest hanging fruit on the tree (goofy pun, but it's true).

    3. Integrate Media Center with Mediaroom, or Kill the Latter
    You have two media products with almost identical names. One is for standalone DVRs and the other is for IPTV cable set tops with DVR capabilities. Can we merge these, please? My previous request of putting Media Center on Xbox would seem to tie into this nicely, since you've announced plans to do that with Mediaroom already.

    4. Fix the Red Ring of Death
    People love the Xbox, but they really don't love sending their consoles back every 18-24 months, when they get a bunch of red lights flashing on power up. You've handled this defect about as gracefully as possible, but it's been around for a long time now and it doesn't seem to be fixed yet. You can do better. In fact, you must do better, or you insult your customers.

    5. Add Blu-ray to Xbox
    I know, streaming movies are the future; physical media is legacy technology. So if that's true, why did you back HD DVD so hard? You know why: for now, the film studios won't allow a large selection of new-release, HD, surround-sound content to be distributed on any medium other than Blu-ray or cable pay-per-view/on-demand. Don't you want home theater buffs to see the Xbox as a fantastic device for their rigs? Don't you want to put PlayStation 3 out of its misery? And if you follow my suggestions above (move Media Center to the Xbox and fix the Red Ring problem), you'd have it all sewn up. Do I think Blu-ray functionality will move a lot of units? No. Do I think it would move more units with desperately needed, influential home theater consumers? You bet. And you might sell more ZunePass subscriptions in the process. But while you're at it, make the fan quieter, please.

    6. Make More of Windows Home Server
    Home Server is a fantastic product. And for reasons unknown to me, it seems like you're letting it languish. Development of the add-in ecosystem seems underfunded. WHS' unparalleled ease of use and reliability for home PC backup (and emergency restores) goes unsung. Product cycles are slow. Support for your OEMs, who are doing great work, especially in the green space with Atom CPUs, seems lacking. You've married a trophy girl and you keep her cloistered at home! That's cruel, unusual and, um, incredibly ill-advised. Make use of this ace card, and while you're at it, give it real integration with Media Center. The integration thus far is proof-of-concept quality. You should go way past that; both products will benefit immeasurably.

    7. Set Up a Partner Platform for Custom Installers
    There's a whole sub-industry of companies that install, integrate and configure home theater, security and connected-home products. They have an industry group. They are influential in the high end of the consumer electronics industry, and so are their customers. They love Media Center and they love Windows Home Server. But I have talked to several of them at the Consumer Electronics Show and they tell me you don't love them. They find it very difficult to do business with Microsoft, even though they want nothing more than to sell and evangelize your platform. This is a travesty. Please fix it. Get Allison Watson and the Microsoft Partner Network on board and have her hire someone who knows how to run a channel program for consumer electronics companies. Problem solved. Markets expanded.

    8. Make Your Own Hardware
    In other areas, I know you love your partners. I help run one, so I appreciate that. But when it came to Xbox and Zune you built them yourself (albeit on a contract basis, which is fine). Windows Phone 7 has a chance to work as an OEM play, but it would work better if you produced the devices. At least consider building a reference device that sells alongside your OEMs' offerings. That's what Google did with the Nexus One. And while that phone was not itself a big seller, it catalyzed two wonderful things: (1) a quality bar was set, and (2) partners exceeded it. Before the Nexus One, the best Android handset out there was the Motorola Droid. The Nexus One was better, and the HTC Droid Incredible and Evo 4G are now even better than Google's phone, which is why Verizon and Sprint decided not to carry it. Imagine if all Windows Phone 6.x devices were on par with the HTC HD2. I tend to believe you'd have a lot bigger market share than you do now.

    9. Continue with Your Retail Initiative
    From what I hear, it sounds like it's going well. And this goes right along with making your own hardware. When you build it, they will come. And then it makes the likes of Best Buy and Staples do better.

    10. Make an Acquisition (or Two)
    TiVo and/or Moxi look ripe for the picking. With their ability to build stuff people love and your ability to run a business, you might just have something. But do a better job than you did when you bought Danger. Buy the ideas, not just the customers, eh?

    11. Make Beautiful Stuff
    You've heard this one before, I know. But I have some head-shrinking advice on this one. You know that Apple obsesses over its industrial design. You know that appeals to consumers. But it seems you think doing so is Apple's game exclusively, and so you shouldn't even try. Bull dinky. Come to New York and visit the Museum of Modern Art's Architecture and Design gallery. You'll see that lots of companies and product categories had very high design value well before Apple existed. You can do this, and the Zune HD was a great start. Now run with that. Find those negative voices in your head that are telling you that you can't, and shut them up. For good.

    12. Burst the Bubble
    Some of the products you've built seem like they were conceived in a bizarro world. That would appear to be the result of groupthink. You must do better. And there are lots of people willing to advise you. This includes just about everyone in the Regional Director program, and probably a bunch of MVPs. Heck, I bet the guys at Engadget could help out too. Imagine if you had let them see the Kin before it shipped. Talk to high-end gear consumers. Talk to Best Buy and CostCo customers too.

    Signing Off
    I hope this was of value to you. As I wrote this, I kept telling myself how obvious, even trite, some of these pieces of advice were, and then, because of that, doubting they'd really help. But I decided that they must not be obvious to Microsoft. Sometimes when you get wrapped up in stuff, it's hard to clear your head. I think my head's pretty clear here, though (I'm wrapped up in other stuff), so maybe my perspective can help. If not, well, then, I guess they all can't be super nice.

    Read the article

  • ASP.NET MVC Postbacks and HtmlHelper Controls ignoring Model Changes

    - by Rick Strahl
    So here's a binding behavior in ASP.NET MVC that I didn't really get until today: HtmlHelper controls (like .TextBoxFor() etc.) don't bind to model values on postback, but rather get their value directly out of the POST buffer from ModelState. Effectively, it looks like you can't change the display value of a control via model value updates on a postback operation. To demonstrate, here's an example. I have a small section in a document where I display an editable email address. This is what the form displays on a GET operation, and as expected I get the email value displayed in both the textbox and the plain value display below, which reflects the value in the model. I added a plain text value to demonstrate the model value compared to what's rendered in the textbox. The relevant markup is the email address, which needs to be manipulated via the model in the controller code. Here's the Razor markup:

        <div class="fieldcontainer">
            <label>
                Email: &nbsp; <small>(username and <a href="http://gravatar.com">Gravatar</a> image)</small>
            </label>
            <div>
                @Html.TextBoxFor(mod => mod.User.Email, new { type = "email", @class = "inputfield" })
                @Model.User.Email
            </div>
        </div>

    So, I have this form and the user can change their email address. On postback, the POST controller code asks the business layer whether the change is allowed. If it's not, I want to reset the email address back to the old value, which exists in the database and was previously stored. The obvious thing to do would be to modify the model. Here's the controller logic block that deals with that:

        // did user change email?
        if (!string.IsNullOrEmpty(oldEmail) && user.Email != oldEmail)
        {
            if (userBus.DoesEmailExist(user.Email))
            {
                userBus.ValidationErrors.Add("New email address exists already. Please…");
                user.Email = oldEmail;
            }
            else
                // allow email change but require verification by forcing a login
                user.IsVerified = false;
        }
        …
        model.user = user;
        return View(model);

    The logic is straightforward: if the new email address is not valid because it already exists, I don't want to display the new email address the user entered, but rather the old one. To do this, I change the value on the model, which effectively does this:

        model.user.Email = oldEmail;
        return View(model);

    So when I press the Save button after entering my new email address ([email protected]), here's what comes back in the rendered view: the textbox value and the raw displayed model value are different. The textbox displays the POST value; the raw value displays the actual model value. This means that MVC renders the textbox value from the POST data rather than from the view data when an HTTP POST is active. Now, I don't know about you, but this is not the behavior I expected, initially. This behavior effectively means that I cannot modify the contents of the textbox from the controller code if I'm using HtmlHelpers for binding. Updating the model for display purposes in a POST has, in effect, no effect. (Apr. 25, 2012: edited the post heavily based on comments and more experimentation.)

    What should the behavior be? After getting quite a few comments on this post, I quickly realized that the behavior described above is actually the behavior you'd want in 99% of binding scenarios. You do want to get the POST values back into your input controls at all times, so that the data displayed on a form matches what the user typed.
    So if an error occurs, the error doesn't mysteriously disappear, getting replaced either with a default value or some value that you changed on the model on your own. Makes sense. Still, it is a little non-obvious, because the way you create the UI elements with MVC certainly makes it look like you are binding to the model value:

        @Html.TextBoxFor(mod => mod.User.Email, new { type = "email", @class = "inputfield", required = "required" })

    and so, unless one understands a little bit about how the model binder works, this is easy to trip up on. At least it was for me. Even though I'm telling the control which model value to bind to, that model value is only used initially on GET operations. After that, ModelState/POST values provide the display value.

    Workarounds. The default behavior should be fine for 99% of binding scenarios. But if you do need to fix up values based on your model rather than the default POST values, there are a number of ways to work around this. Initially, when I ran into this, I couldn't figure out how to set the value from code, so the simplest solution to me was simply not to use the MVC HtmlHelper for the specific control and to bind the model explicitly via HTML markup and a Razor expression:

        <input type="text" name="User.Email" id="User_Email" value="@Model.User.Email" />

    And this produces the right result. This is easy enough to create, but feels a little out of place when using the @Html helpers for everything else. As you can see by the difference in the name and id values, you are also forced to remember the naming conventions that MVC imposes in order for model binding to work properly, which is a pain to remember and set manually (name is the same as the property with . syntax; id replaces dots with underscores).

    Use the ModelState. Some of my original confusion came from not understanding how the model binder works. The model binder basically maintains ModelState on a postback, which holds a value and binding errors for each of the postback values submitted on the page that can be mapped to the model. In other words, there's one ModelState entry for each bound property of the model. Each ModelState entry contains a Value property that holds AttemptedValue and RawValue properties. The AttemptedValue is essentially the POST value retrieved from the form; the RawValue is the value that the model holds. When MVC binds controls like @Html.TextBoxFor() or @Html.TextBox(), it always binds values on a GET operation. On a POST operation, however, it always uses the AttemptedValue to display the control. MVC binds using the ModelState on a POST operation, not the model's value. So, if you want the behavior I was expecting originally, you can actually get it by clearing the ModelState in the controller code:

        ModelState.Clear();

    This clears out all the captured ModelState values and effectively binds to the model. Note this will produce very similar results: in fact, if there are no binding errors, you see exactly the same behavior as when binding from ModelState, because the model has already been updated from the ModelState, and binding to the updated values most likely produces the same values you would get with POST values. The big difference, though, is that any values that couldn't bind (like, say, a string put into a numeric field) will now not display back as the value the user typed, but as the default field value or whatever you changed the model value to. This is the behavior I was actually expecting previously. But clearing out all values might be a bit heavy-handed.
You might want to fix up one or two values in a model, but rarely would you want the entire model to rebind from scratch. So, you can also clear out individual values on an as-needed basis:

    if (userBus.DoesEmailExist(user.Email))
    {
        userBus.ValidationErrors.Add("New email address exists already. Please…");
        user.Email = oldEmail;
        ModelState.Remove("User.Email");
    }

This removes a single value from the ModelState, and effectively allows you to replace that value for display from the model.

Why?

While researching this I came across a post from Microsoft's Brad Wilson, who describes the default binding behavior best in a forum post:

The reason we use the posted value for editors rather than the model value is that the model may not be able to contain the value that the user typed. Imagine in your "int" editor the user had typed "dog". You want to display an error message which says "dog is not valid", and leave "dog" in the editor field. However, your model is an int: there's no way it can store "dog". So we keep the old value. If you don't want the old values in the editor, clear out the Model State. That's where the old value is stored and pulled from the HTML helpers.

There you have it. It's not the most intuitive behavior, but in hindsight it does make some sense, even if at first glance it looks like you should be able to update values from the model. The solution of clearing ModelState works and is a reasonable one, but you have to know about some of the innards of ModelState and how it actually works to figure that out.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET  MVC

    Read the article

  • Html Agility Pack for Reading “Real World” HTML

    - by WeigeltRo
    In an ideal world, all data you need from the web would be available via well-designed services. In the real world you sometimes have to scrape the data off a web page. Ugly, dirty – but if you really want that data, you have no choice. Just don't write (yet another) HTML parser. I stumbled across the Html Agility Pack (HAP) a long time ago, but only now had the need for a robust way to read HTML. A quote from the website:

This is an agile HTML parser that builds a read/write DOM and supports plain XPATH or XSLT (you actually don't HAVE to understand XPATH nor XSLT to use it, don't worry...). It is a .NET code library that allows you to parse "out of the web" HTML files. The parser is very tolerant with "real world" malformed HTML. The object model is very similar to what proposes System.Xml, but for HTML documents (or streams).

Using the HAP was a simple matter of getting the NuGet package, taking a look at the example and dusting off some of my XPath knowledge from years ago. The documentation on the Codeplex site is non-existent, but if you've queried a DOM or used XPath or XSLT before, you shouldn't have problems finding your way around using IntelliSense (ReSharper tip: press Ctrl+Shift+F1 on class members to read the full doc comments).
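To give an idea of what this looks like in practice, here is a minimal sketch (the URL and the XPath expression are hypothetical; note that SelectNodes returns null when nothing matches, so the guard is not optional):

    using System;
    using HtmlAgilityPack;

    class Scraper
    {
        static void Main()
        {
            // Load a page straight off the web; HAP tolerates malformed HTML
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("http://example.com/somepage.html");

            // XPath query against the DOM: all anchor tags that carry an href
            var links = doc.DocumentNode.SelectNodes("//a[@href]");
            if (links != null)
            {
                foreach (HtmlNode link in links)
                    Console.WriteLine(link.GetAttributeValue("href", string.Empty));
            }
        }
    }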

    Read the article

  • Recommendation for Regex editor?

    - by Tim
    I asked for recommendations for Regex editors on Stack Overflow a while ago. Following is one of the replies:

What is "good" depends on what is most useful to you. For me, though, these are the key features of a good regex editor (besides the ability to test and create regular expressions, of course, which is a prerequisite to be called a "regex editor" :-) ):
- Displays matches hierarchically with captured groups.
- Explains/analyzes an entered regex in plain English, showing a hierarchical tree.
- Translates your regex into code for a language of your choice.
RegexBuddy, as @Max mentioned, does all these, but there is also a free alternative, Expresso, that also does them very well. These two utilities are the only ones I have found with the crucial ability to explain a regex.

The features sound very attractive to me, but later I found that both tools are Windows-only. I tried to install Expresso, the free one, via Wine, but ran into some trouble, about which I asked in another post. So I was wondering whether Ubuntu has applications comparable to RegexBuddy and Expresso? And if installing Expresso requires the .NET Framework, is it still worth installing on Ubuntu? Thanks and regards!

    Read the article

  • OS X Server DNS management

    - by Sorin Buturugeanu
    I have an OS X 10.6 Server running, which has PHP, Apache, MySQL, and DNS on it. I want to take DNS management out of the Server Admin app. I know that the DNS configuration files (the ones BIND uses) are plain text files (which have to obey some rules, obviously). The main reason for this is that I wanted to set up DKIM for one of my domains, and I had to add a TXT record to the subdomain pm._domainkey.example.com. Server Admin did not let me add that subdomain because of the "invalid" underscore character. I searched for web-based DNS management tools (the kind I could install on my server to manage my DNS records), but I couldn't find any good ones. (There were a couple I managed to install, but they didn't see the configuration I had already set up in Server Admin.) Now I'm looking into editing the config files directly, but I don't know where they're located. This is a test/development server, so messing it up wouldn't be such a disaster. I know "I shouldn't do this", but I want to :). Thanks for your help.
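For reference, the record being described would look something like this in a BIND zone file (a hypothetical fragment; the key data is a placeholder). Underscores are rejected by Server Admin's hostname validation, but they are legal in DNS labels used by records like TXT:

    ; Hypothetical fragment of the example.com zone file
    pm._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=<public-key-data>"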

    Read the article

  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since worktile is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don't consider myself a JavaScript expert. In fact I'm very new to it, so it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have come to love Node.js. Hence I decided to start a series named "Node.js Adventure", where I will demonstrate my story of learning and using Node.js on Windows and Windows Azure. This is the first one.

(Brief) Introduction of Node.js

I don't want to give a fully detailed introduction of Node.js. There are many resources on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of about 80% C/C++ for the core and 20% JavaScript for the API. It utilizes CommonJS as the module system, which we will explain later. The official definition of Node.js is:

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

First of all, Node.js utilizes JavaScript as its development language and runs on top of the V8 engine, the same engine used by Chrome. It brings JavaScript, a client-side language, into the backend service world. So many people say - even though it's not strictly accurate - that "Node.js is server-side JavaScript". Additionally, Node.js uses an event-driven, non-blocking IO model. This means in Node.js there's no way to block the currently working thread; every operation executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to databases, consuming web services, etc.

Unlike IIS or Apache, Node.js doesn't utilize a multi-threaded model. In Node.js there is only one working thread serving all user requests and resource responses, shown as the ST star in the figure below. And there is a POSIX async threads pool in Node.js which contains many async threads (AT stars) for IO operations. When a user makes an IO request, the ST serves it but does not perform the IO operation itself. Instead the ST goes to the POSIX async threads pool, picks up an AT, passes the operation to it, and then goes back to serve any other requests. The AT performs the IO operation asynchronously. Suppose another user's request comes in before that AT completes its IO operation. The ST serves the new request, picks up another AT from the POSIX pool, and goes back again. When the first AT has finished its IO operation, it takes the result back and waits for the ST to serve it. The ST takes the response, returns the AT to the POSIX pool, and responds to the user. And when the second AT finishes its job, the ST responds to the second user in the same way. As you can see, in Node.js there's only one thread serving clients' requests and POSIX results. This thread loops between the users and the POSIX pool, passing the data back and forth. The async jobs are handled by the POSIX pool.
This is the event-driven non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model, while Nginx uses the event-driven non-blocking model. Below is the performance comparison between them, and below that the memory usage comparison. These charts are captured from the video NodeJS Basics: An Introductory Training, which was presented by a Cloud Foundry Developer Advocate.

Node.js on Windows

Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open a command window and type "node"; the Node.js console will appear. As you can see, this is a JavaScript interactive console where we can type simple JavaScript code and commands. To run a Node.js JavaScript application, just specify the source code file name as the argument of the "node" command. For example, let's create a Node.js source code file named "helloworld.js" and copy the sample code from the Node.js website into it:

    var http = require("http");

    http.createServer(function (req, res) {
        // answer every request with a plain text "Hello World"
        res.writeHead(200, {"Content-Type": "text/plain"});
        res.end("Hello World\n");
    }).listen(1337, "127.0.0.1");

    console.log("Server running at http://127.0.0.1:1337/");

This code creates a web server listening on port 1337 that returns "Hello World" for any request. Run it in the command window, then open a browser and navigate to http://localhost:1337/. As you can see, when using Node.js we are not so much creating a web application as creating a web server: we deal with the request, the response and the related headers, status code, etc. And this is one of the benefits of using Node.js - it's lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js utilizes CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

NPM and Node.js Modules

Node.js utilizes CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named "index.js", then all the modules it needs will be located in the "node_modules" folder, and in "index.js" we can import a module by specifying its name. For example, in the code we've just created, we imported a module named "http", which is a built-in module installed along with Node.js, so we can use the code in that "http" module. Besides the built-in modules there are many modules available on the NPM website. Thousands of developers contribute and download modules there. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules is growing very fast, and we can also publish our own modules to the community. When I wrote this post there were 14,608 modules on NPM, with about 10 thousand downloads per day. Installing a module is very simple. Let's go back to our command window and enter the command "npm install express". This command installs a module named "express", which is an MVC framework on top of Node.js.
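Before wiring "express" into an application, it helps to see the CommonJS mechanics described above in their smallest form. Here is a minimal, hypothetical two-file sketch - a local module exporting a function, consumed via require() with a relative path:

    // greeter.js - a minimal CommonJS module (hypothetical example)
    module.exports.hello = function (name) {
        return "Hello, " + name + "!";
    };

    // index.js - load the local module via a relative path
    var greeter = require("./greeter");
    console.log(greeter.hello("Node.js")); // prints "Hello, Node.js!"

Running "node index.js" prints the greeting, which is all there is to the module system from a consumer's point of view.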
Now let's create another JavaScript file named "helloweb.js" and copy the code below into it. I imported the "express" module; when the user browses the home page it responds with a text message, and when the incoming URL matches "/Echo/:value", where "value" is whatever the user specified, it echoes the value back along with the current date and time in JSON format. Finally, the website listens on port 12345.

    var express = require("express");
    var app = express();

    app.get("/", function(req, res) {
        res.send("Hello Node.js and Express.");
    });

    app.get("/Echo/:value", function(req, res) {
        var value = req.params.value;
        res.json({
            "Value" : value,
            "Time" : new Date()
        });
    });

    console.log("Web application opened.");
    app.listen(12345);

For more information on "express" and its API, please have a look here. Start the application from the command window with "node helloweb.js", then navigate to the home page to see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

The "express" module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other useful modules on NPM, for example:
- underscore: A utility module covering many common functions such as each, map, reduce, select, etc.
- request: A very simple HTTP request client.
- async: A library for coordinating async operations.
- wind: A library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

Node.js and IIS

I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: "Can I host Node.js in IIS?" The answer is "Yes". Tomasz Janczuk created a project named IISNode in his GitHub space, which we can find here. And Scott Hanselman has published a blog post introducing it.

Summary

In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven non-blocking model. I then described how to install and run a Node.js application from the Windows console, covered the Node.js module system and the NPM command, and at the end referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS. Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features it is very useful for building highly scalable, asynchronous services. I think Node.js will be used widely in cloud application development in the near future.

In the next post I will explain how to use SQL Server from Node.js.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article
