Search Results

Search found 15061 results on 603 pages for 'modified date'.


  • Burg Custom Icons work only with specific themes

    - by el10780
    I have made a custom icon for the Burg loader for my Lenovo Recovery Partition. I made three icons: large_qdrive.png (128 x 128 pixels), small_qdrive.png (24 x 24 pixels) and grey_qdrive.png (128 x 128 pixels). I created the .png icons with GIMP from a qdrive.ico file that I found on the Lenovo Recovery Partition. I copied the icons to the /boot/burg/themes/icons folder and added the following lines to the class lists of the grey, large, small and hover files: -qdrive { image = "$$/large_qdrive.png" } in the large file, -qdrive { image = "$$/small_qdrive.png" } in the small file, -qdrive { image = "$$/grey_qdrive.png" } in the grey file and -qdrive { image = "$$/grey_qdrive.png:$$/large_qdrive.png" } in the hover file. I ran sudo update-burg and then modified the following line in burg.cfg: menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { to menuentry "Windows 7 (loader) (on /dev/sda2)" --class qdrive --class os { and I also tried to change the title for the Lenovo Recovery Partition, so I tried this as well: menuentry "Lenovo Recovery Partition (on /dev/sda2)" --class qdrive --class os { None of these attempts actually made the Burg loader use the custom icon, and I can't figure out why. I should also mention that a few of the themes I have installed in Burg are able to use the small_qdrive.png icon I made, but all the others, which use either large_qdrive.png or grey_qdrive.png, were not able to use the custom icons. I have double-checked all the files I created or modified for typos, so I am fairly sure nothing is misspelled, and the file names of the custom icons are correct as well. I have also looked for any other folder the themes might use to retrieve their icons, but it seems that all of them, except for the **Fortune** theme which I downloaded from OMG! Ubuntu!, use the icons folder located in /boot/burg/themes/icons. I tried adding the custom icons to the icons folder of the **Fortune** theme as well, but still nothing happened.
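    For reference, here is the same configuration gathered in one place, exactly as described above. The per-theme file paths are my assumption about where the grey/large/small/hover class files live; everything else is the configuration from the question:

    ```
    # /boot/burg/themes/<theme>/large
    -qdrive { image = "$$/large_qdrive.png" }
    # /boot/burg/themes/<theme>/small
    -qdrive { image = "$$/small_qdrive.png" }
    # /boot/burg/themes/<theme>/grey
    -qdrive { image = "$$/grey_qdrive.png" }
    # /boot/burg/themes/<theme>/hover
    -qdrive { image = "$$/grey_qdrive.png:$$/large_qdrive.png" }

    # /boot/burg/burg.cfg, after running: sudo update-burg
    menuentry "Windows 7 (loader) (on /dev/sda2)" --class qdrive --class os {
    ```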


  • Configuring SQL Server Management Studio to use Windows Integrated Authentication … from non-

    - by Enrique Lima
    Did you know you can pass your Windows credentials to SQL Server even when working from a workstation that is not joined to a domain? Here is how. From Start, click All Programs and find Microsoft SQL Server (version 2005 or 2008). Once there, right-click SQL Server Management Studio, then click Properties. Now modify the entry for Target. The real task uses the runas command: modify the shortcut's target as follows, and remember to replace <domain\user> with the values that correspond to your environment:

    x64, SQL Server 2008: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
    x64, SQL Server 2005: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"
    x86, SQL Server 2008: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\100\Tools\binn\VSShell\Common7\IDE\Ssms.exe -nosplash"
    x86, SQL Server 2005: C:\Windows\System32\runas.exe /user:<domain\user> /netonly "C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe -nosplash"

    Since we modified the shortcut, we will need to fix the icon for SSMS. We fix it by pressing the Change Icon… button and pointing to the original "icon" providers. It is the executables for SSMS that hold the icon information, so we need to point to:

    x64, SQL Server 2008: %ProgramFiles(x86)%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
    x64, SQL Server 2005: C:\Program Files (x86)\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe
    x86, SQL Server 2008: %ProgramFiles%\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe
    x86, SQL Server 2005: C:\Program Files\Microsoft SQL Server\90\Tools\binn\VSShell\Common7\IDE\SqlWb.exe

    When you start SSMS from the modified shortcut, you will be prompted for your domain password. SSMS will show a different account in the username box, but the parameters from the configuration above do work and are passed on correctly.
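    All four variants follow the same pattern; generically (the placeholders are mine, not from the post):

    ```
    C:\Windows\System32\runas.exe /user:<domain\user> /netonly "<path to Ssms.exe or SqlWb.exe> -nosplash"
    ```

    The /netonly switch tells runas to use the supplied credentials only for remote network access, which is why this works from a machine that is not joined to the domain.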


  • I created a program based on an LGPL project, and I'm not allowed to publish the source code

    - by Dave
    I thought the LGPL was a permissive license, just like MIT, BSD or Apache. But today I read that only linking to LGPL code (libraries, etc.) is allowed from closed-source code; beyond that it is copyleft, so I have to publish code that is based on an LGPL program. I created a program for my employer that is based on an LGPL program but has considerable modifications. Of course, I am not allowed to put that modified source code out there, yet at the same time I have to if I distribute it (right?). So I wonder whether there is a workaround that lets me keep this closed source (I wish I could publish the source). Any suggestions? My idea: can I put most functions of the original LGPL app into an external library, write the core executable from scratch, and refer back to the library for all functions that I haven't modified? Currently everything is in a single .jar file (it's Java/Swing). If you think my idea is legally and technically feasible: how much effort would it be to separate what I wrote from the original? I'm not the most Java-savvy.


  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: the sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private or public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that the response varies with the encoding.
    - Use Last-Modified in conjunction with ETag. This belt-and-braces usage provides both validators; since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte-blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can; this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
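    A minimal Apache sketch of those rules (requires mod_headers; the file patterns and max-age values are illustrative assumptions, not part of the original answer):

    ```apache
    # Multiple servers, same content: drop the inode from ETags
    FileETag MTime Size

    # Compressed text assets: keep shared proxies from re-serving them
    # to clients that cannot decompress, and mark the encoding as a cache key
    <FilesMatch "\.(css|js|html)$">
        Header set Cache-Control "private, max-age=604800"
        Header append Vary "Accept-Encoding"
    </FilesMatch>

    # Uncompressed binary assets can be cached by everyone
    <FilesMatch "\.(png|jpg|gif|ico)$">
        Header set Cache-Control "public, max-age=2592000"
    </FilesMatch>
    ```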


  • Adventures in Lab Management Configuration: CMMI Edition Part 1 of 3

    - by Enrique Lima
    I remember someone at one point telling me how close Migrate is to Migraine. This was a process that included an environment migration from TFS 2008 to TFS 2010, and the process template needed to be migrated too; here we are talking about CMMI v4.2 to CMMI v5.0. Now, migrating the TFS infrastructure is one thing; migrating the process template is a different deal, not hard, just involved. I followed a combination of steps that came from a blog post as the main guidance, complemented (as suggested on the guidance post) by MSDN for some tasks and steps. Again, the focus I have here is CMMI. The high-level steps taken to bring the TFS 2008 CMMI v4.2 process template into TFS 2010 were: 1) Back up the collection, configuration and warehouse databases. 2) Download the process template using Visual Studio 2010. 3) Export, modify and import the Bug type definition (see the sketch after this post's steps). 4) Export, modify and import the Scenario or Requirement type definition. 5) Create and import bug field mappings. Now we can attempt to connect using Test Manager, and you should be able to get this going. After that was done, it was time to enroll VMs that already existed in the environment. This was a bit more challenging, but in the end it was a matter of analyzing the changes that had been made as a temporary workaround between the time we migrated and the time we converted the work items, and adding fields to enable communication between the project and the Test and Lab Manager component. There are two more parts to this post: the second will describe the detailed steps taken to complete the process template update, and the third will talk about the gotchas and fixes for the Lab Management portion.
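    Steps 3 and 4 are typically done with the witadmin command-line tool that ships with Visual Studio 2010. This is a sketch of the Bug round-trip under that assumption, not necessarily the exact commands used here; the collection URL and project name are placeholders:

    ```
    witadmin exportwitd /collection:http://tfs:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:Bug.xml
    REM edit Bug.xml to apply the CMMI v5.0 changes, then:
    witadmin importwitd /collection:http://tfs:8080/tfs/DefaultCollection /p:MyProject /f:Bug.xml
    ```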


  • How are objects modelled in a functional programming language?

    - by Giorgio
    In an answer to this question (written by Pete) there are some considerations about OOP versus FP. In particular, it is suggested that FP languages are not very suitable for modelling (persistent) objects that have an identity and a mutable state. I was wondering if this is true or, in other words, how one would model objects in a functional programming language. From my basic knowledge of Haskell I thought that one could use monads in some way, but I really do not know enough about this topic to come up with a clear answer. So, how are entities with an identity and a mutable persistent state normally modelled in a functional language? EDIT: Here are some further details to clarify what I have in mind. Take a typical Java application in which I can (1) read a record from a database table into a Java object, (2) modify the object in different ways, (3) save the modified object to the database. How would this be implemented, e.g. in Haskell? I would initially read the record into a record value (defined by a data definition), perform different transformations by applying functions to this initial value (each intermediate value being a new, modified copy of the original record), and then write the final record value to the database. Is this all there is to it? How can I ensure that at each moment in time only one copy of the record is valid / accessible? One does not want different immutable values representing different snapshots of the same object to be accessible at the same time.
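    As a rough illustration of the read-transform-write flow described in the question, here is a minimal Haskell sketch. The record type and the two database stubs (fetchRecord, saveRecord) are hypothetical stand-ins; a real program would call a database library from IO:

    ```haskell
    data Account = Account { accountId :: Int, balance :: Double }
      deriving Show

    -- Hypothetical stand-ins for the database layer.
    fetchRecord :: Int -> IO Account
    fetchRecord key = return (Account key 100.0)

    saveRecord :: Account -> IO ()
    saveRecord acc = putStrLn ("saved: " ++ show acc)

    -- Pure transformations: each returns a new value, nothing is mutated.
    deposit :: Double -> Account -> Account
    deposit amount acc = acc { balance = balance acc + amount }

    applyFee :: Account -> Account
    applyFee acc = acc { balance = balance acc - 1.5 }

    main :: IO ()
    main = do
      acc <- fetchRecord 42                    -- (1) read
      let acc' = applyFee (deposit 50.0 acc)   -- (2) transform
      saveRecord acc'                          -- (3) write back
    ```

    The intermediate copies exist only locally inside main, which is one answer to the "only one valid copy" concern: nothing outside this function can observe the stale snapshots.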


  • In an online questionnaire, what is the best way to design a database to keep track of all of a user's attempts?

    - by user1990525
    We have a web app where users can take online exams. An exam admin creates a questionnaire, a questionnaire can have many questions, and each question is a multiple-choice question (MCQ). Let's say an admin creates a questionnaire with 10 questions and users attempt those questions. Now, unlike real exams, users can attempt a single questionnaire multiple times, and we have to keep track of all of their attempts, e.g.:

    User_id | Questionnaire_id | Question_id | Answer | Attempt_date | Attempt_no
    1       | 1                | 1           | a      | 1 June 2013  | 1
    1       | 1                | 2           | b      | 1 June 2013  | 1
    1       | 1                | 1           | c      | 2 June 2013  | 2
    1       | 1                | 2           | d      | 2 June 2013  | 2

    It can also happen that after a user has attempted the same questionnaire twice, the admin deletes a question from it; the user's attempt history should still reference that question, so the user can see it in his attempt history in spite of the admin deleting it. If the user now attempts the changed questionnaire, he should see only one question:

    User_id | Questionnaire_id | Question_id | Answer | Attempt_date | Attempt_no
    1       | 1                | 1           | a      | 3 June 2013  | 3

    Also, if the admin modifies some part of a question after this, the user's attempt history should show the question as it was before the modification, while any new attempt should show the modified question. How do we manage this at the database level? My first gut feeling was: for deletes, do not delete physically, just mark the question inactive so that the history can still track the user's attempts; for modifications, create versions of each question, with each new attempt referring to the latest version of a question and the history keeping a reference to the version that was current at attempt time.
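    A sketch of one schema that matches that gut feeling; every table and column name here is illustrative, not from the question:

    ```sql
    -- Questions are soft-deleted, never physically removed.
    CREATE TABLE question (
        question_id      INT PRIMARY KEY,
        questionnaire_id INT NOT NULL,
        is_active        BIT NOT NULL DEFAULT 1
    );

    -- Each edit creates a new immutable version of the question text.
    CREATE TABLE question_version (
        question_version_id INT PRIMARY KEY,
        question_id         INT NOT NULL REFERENCES question(question_id),
        version_no          INT NOT NULL,
        question_text       VARCHAR(1000) NOT NULL
    );

    -- Attempts reference the version, so history survives edits and deletes.
    CREATE TABLE attempt_answer (
        user_id             INT NOT NULL,
        questionnaire_id    INT NOT NULL,
        question_version_id INT NOT NULL REFERENCES question_version(question_version_id),
        answer              CHAR(1) NOT NULL,
        attempt_date        DATE NOT NULL,
        attempt_no          INT NOT NULL
    );
    ```

    A new attempt joins question (WHERE is_active = 1) to the latest question_version, while the history pages simply read attempt_answer as stored.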


  • What is a suitable way to correct this program? [closed]

    - by lamwaiman1988
    Possible Duplicate: How to deal with huge changes to a data specification? As described in this question, I need to modify a 4,800-line C program to fulfil a new functional specification, but I have no idea how to do it. Background: there is an updated version of a header file that the program depends on. I have inspected the changes and found that at least 30 critical changes are present. With the new header file, the C program cannot even compile; there are so many errors that the compiler says "too many errors" and doesn't list every one. Unlike a normal modification, which can be tested easily, there is no way to test this without getting past all 30 critical changes. For example, there would be no point running the modified program with the old header file. On the other hand, with the new header file the program can't run until I have covered every one of the 30 critical changes scattered through it, so I can't even verify the correctness of each modification until there are no more compilation errors. Do you have any ideas for dealing with this situation?


  • Windows XP - Security Update for Windows XP (KB923561) (KB946648) (KB956572) (KB958644)

    - by leeand00
    My father's computer has Windows XP, but when I try to install these updates the installation always fails. What gives? Here are the errors that I get in the event log:

    Date: 2/6/2010  Time: 12:02:18 AM  Type: Error  User: N/A  Computer: EVO
    Source: Windows Update Agent  Category: Installation  Event ID: 20
    Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB946648). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    0000: 57 69 6e 33 32 48 52 65 Win32HRe
    0008: 73 75 6c 74 3d 30 78 38 sult=0x8
    0010: 30 30 37 30 30 30 32 20 0070002
    0018: 55 70 64 61 74 65 49 44 UpdateID
    0020: 3d 7b 38 33 44 31 41 44 ={83D1AD
    0028: 46 35 2d 37 37 39 44 2d F5-779D-
    0030: 34 30 31 36 2d 38 43 33 4016-8C3
    0038: 31 2d 35 34 39 32 37 30 1-549270
    0040: 46 36 37 42 33 46 7d 20 F67B3F}
    0048: 52 65 76 69 73 69 6f 6e Revision
    0050: 4e 75 6d 62 65 72 3d 31 Number=1
    0058: 30 34 20 00 04 .

    Date: 2/6/2010  Time: 12:02:18 AM  Type: Error  User: N/A  Computer: EVO
    Source: Windows Update Agent  Category: Installation  Event ID: 20
    Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB956572). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    0000: 57 69 6e 33 32 48 52 65 Win32HRe
    0008: 73 75 6c 74 3d 30 78 38 sult=0x8
    0010: 30 30 37 30 30 30 32 20 0070002
    0018: 55 70 64 61 74 65 49 44 UpdateID
    0020: 3d 7b 44 46 32 46 30 41 ={DF2F0A
    0028: 39 38 2d 36 45 33 35 2d 98-6E35-
    0030: 34 33 37 39 2d 41 42 33 4379-AB3
    0038: 33 2d 41 30 33 30 33 45 3-A0303E
    0040: 46 37 34 42 32 41 7d 20 F74B2A}
    0048: 52 65 76 69 73 69 6f 6e Revision
    0050: 4e 75 6d 62 65 72 3d 31 Number=1
    0058: 30 32 20 00 02 .

    Date: 2/6/2010  Time: 12:02:18 AM  Type: Error  User: N/A  Computer: EVO
    Source: Windows Update Agent  Event ID: 20
    Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB958644). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    0000: 57 69 6e 33 32 48 52 65 Win32HRe
    0008: 73 75 6c 74 3d 30 78 38 sult=0x8
    0010: 30 30 37 30 30 30 32 20 0070002
    0018: 55 70 64 61 74 65 49 44 UpdateID
    0020: 3d 7b 39 33 39 37 41 32 ={9397A2
    0028: 31 46 2d 32 34 36 43 2d 1F-246C-
    0030: 34 35 33 42 2d 41 43 30 453B-AC0
    0038: 35 2d 36 35 42 46 34 46 5-65BF4F
    0040: 43 36 42 36 38 42 7d 20 C6B68B}
    0048: 52 65 76 69 73 69 6f 6e Revision
    0050: 4e 75 6d 62 65 72 3d 31 Number=1
    0058: 30 31 20 00 01 .

    Date: 2/6/2010  Time: 12:02:18 AM  Type: Error  User: N/A  Computer: EVO
    Source: Windows Update Agent  Category: Installation  Event ID: 20
    Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB923561). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
    0000: 57 69 6e 33 32 48 52 65 Win32HRe
    0008: 73 75 6c 74 3d 30 78 38 sult=0x8
    0010: 30 30 37 30 30 30 32 20 0070002
    0018: 55 70 64 61 74 65 49 44 UpdateID
    0020: 3d 7b 33 31 30 41 34 43 ={310A4C
    0028: 30 38 2d 35 39 33 44 2d 08-593D-
    0030: 34 31 41 33 2d 42 42 35 41A3-BB5
    0038: 37 2d 38 33 42 33 38 36 7-83B386
    0040: 44 37 37 33 42 35 7d 20 D773B5}
    0048: 52 65 76 69 73 69 6f 6e Revision
    0050: 4e 75 6d 62 65 72 3d 31 Number=1
    0058: 30 33 20 00 03 .

    Thank you, Andrew


  • Can't log to a file from Tomcat 6 with log4j

    - by Ivan Nakov
    I have one stupid problem which has been killing me for hours. I'm trying to configure logging for my project. I started with a simple Spring MVC project generated by STS, then added an org.apache.log4j.RollingFileAppender to the existing log4j.xml file:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>

    <!-- Appenders -->
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out" />
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%-5p: %c - %m%n" />
        </layout>
    </appender>
    <appender name="FilleAppender" class="org.apache.log4j.RollingFileAppender">
        <param name="maxFileSize" value="100KB" />
        <param name="maxBackupIndex" value="2" />
        <param name="File" value="/home/ivan/Desktop/app.log" />
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{ABSOLUTE} %5p %c{1}: %m%n " />
        </layout>
    </appender>

    <!-- Application Loggers -->
    <logger name="org.elsys.logger">
        <level value="debug" />
    </logger>

    <!-- 3rdparty Loggers -->
    <logger name="org.springframework.core">
        <level value="info" />
    </logger>
    <logger name="org.springframework.beans">
        <level value="info" />
    </logger>
    <logger name="org.springframework.context">
        <level value="info" />
    </logger>
    <logger name="org.springframework.web">
        <level value="info" />
    </logger>

    <!-- Root Logger -->
    <root>
        <priority value="debug" />
        <appender-ref ref="FilleAppender" />
    </root>
    ```

    When I deploy the project to a Tomcat 6 server and open the URL, the logger doesn't generate a log file. I'm trying to log from this controller:

    ```java
    @Controller
    public class HomeController {

        private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

        /**
         * Simply selects the home view to render by returning its name.
         */
        @RequestMapping(value = "/", method = RequestMethod.GET)
        public String home(Locale locale, Model model) {
            logger.info("Welcome home! the client locale is " + locale.toString());
            Date date = new Date();
            DateFormat dateFormat = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, locale);
            String formattedDate = dateFormat.format(date);
            logger.debug("send view");
            model.addAttribute("serverTime", formattedDate);
            return "home";
        }
    }
    ```

    When I log from this simple Main class, it works correctly:

    ```java
    public class Main {
        public static void main(String[] args) {
            Logger log = LoggerFactory.getLogger(Main.class);
            log.debug("Test");
        }
    }
    ```

    I'm using Tomcat 6 and Ubuntu 11.10. I have researched this on the net and found various suggested fixes, but they don't help me. If someone has ideas on how to fix this, please help.
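    For reference, a log4j 1.2 XML file is normally wrapped in a log4j:configuration root element; the file quoted above appears to have lost it, perhaps in copying. A minimal skeleton looks like this:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <!-- appenders, loggers and root go here -->
    </log4j:configuration>
    ```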


  • Nginx error page with JSON response

    - by Waseem
    I'm trying to serve a maintenance page to clients making requests to my application while it is under maintenance. The following is my nginx configuration for that purpose:

    ```
    server {
        recursive_error_pages on;
        listen 80;
        ...
        if (-f $document_root/maintenance.html) {
            return 503;
        }
        error_page 404 /404.html;
        error_page 500 502 504 /500.html;
        error_page 503 @503;
        location = /404.html {
            root $document_root;
        }
        location = /500.html {
            root $document_root;
        }
        location @503 {
            error_page 405 =/maintenance.html;
            if (-f $request_filename) {
                break;
            }
            rewrite ^(.*)$ /maintenance.html break;
        }
    }
    ```

    Let's say I have enabled maintenance of my site by creating $document_root/maintenance.html. This file is, correctly, served when a user makes a request with an Accept header of text/html:

    ```
    $ curl http://server.com/ -i -v -X GET -H "Accept: text/html"
    * Adding handle: conn: 0xf89420
    * Adding handle: send: 0
    * Adding handle: recv: 0
    * Curl_addHandleToPipeline: length: 1
    * - Conn 0 (0xf89420) send_pipe: 1, recv_pipe: 0
    * About to connect() to server.com port 80 (#0)
    *   Trying xxx.xxx.xxx.xxx...
    * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.33.0
    > Host: server.com
    > Accept: text/html
    >
    < HTTP/1.1 503 Service Temporarily Unavailable
    HTTP/1.1 503 Service Temporarily Unavailable
    * Server nginx/1.1.19 is not blacklisted
    < Server: nginx/1.1.19
    Server: nginx/1.1.19
    < Date: Thu, 14 Nov 2013 11:16:16 GMT
    Date: Thu, 14 Nov 2013 11:16:16 GMT
    < Content-Type: text/html
    Content-Type: text/html
    < Content-Length: 27
    Content-Length: 27
    < Connection: keep-alive
    Connection: keep-alive
    <
    This is under maintenance.
    * Connection #0 to host server.com left intact
    ```

    Now some clients set the Accept header to application/json. How do I send them a JSON response instead of maintenance.html? The following is the response I get when setting Accept to application/json:

    ```
    $ curl http://server.com/ -i -v -X GET -H "Accept: application/json"
    * Adding handle: conn: 0x190c430
    * Adding handle: send: 0
    * Adding handle: recv: 0
    * Curl_addHandleToPipeline: length: 1
    * - Conn 0 (0x190c430) send_pipe: 1, recv_pipe: 0
    * About to connect() to server.com port 80 (#0)
    *   Trying xxx.xxx.xxx.xxx...
    * Connected to server.com (xxx.xxx.xxx.xxx) port 80 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.33.0
    > Host: server.com
    > Accept: application/json
    >
    < HTTP/1.1 503 Service Temporarily Unavailable
    HTTP/1.1 503 Service Temporarily Unavailable
    * Server nginx/1.1.19 is not blacklisted
    < Server: nginx/1.1.19
    Server: nginx/1.1.19
    < Date: Thu, 14 Nov 2013 11:15:50 GMT
    Date: Thu, 14 Nov 2013 11:15:50 GMT
    < Content-Type: text/html
    Content-Type: text/html
    < Content-Length: 27
    Content-Length: 27
    < Connection: keep-alive
    Connection: keep-alive
    <
    This is under maintenance.
    * Connection #0 to host server.com left intact
    ```


  • Exchange 2003 mail non-delivery (NDR), spam activity? events 7002 & 7004

    - by HighTechGeek
    Windows Server 2003 Small Business Server SP2, Exchange version 6.5 (build 7638.2: Service Pack 2). This network has been neglected, has been having email problems for years, and was on many blacklists. I was called in after the server eventually crashed. I got the server back up and running, but email problems persist. Outgoing mail delivery is sporadic: sometimes the mail goes through, sometimes a delayed-delivery report is generated after a day or more, and sometimes it seems to go through but the recipient never receives it. I am not sure if spammers are successfully using the server as a relay (see the event entries below, captured after turning on maximum SMTP logging). User PCs were infected with viruses and the server was blacklisted on many sites (I used mxtoolbox.com). I have cleaned all the PCs and changed all passwords (including administrator). I have requested removal from all of the blacklists; most have removed the listing, some take more time. I have set up rDNS pointer records with the ISP (Comcast), as that was one reason for some of the blacklistings. I have tested that it's not an open relay using telnet, as described here: www.amset.info/exchange/smtp-openrelay.asp. I followed the advice of a Spamhaus and Microsoft article to enable maximum SMTP logging, http://www.spamhaus.org/faq/answers.lasso?section=isp%20spam%20issues#320, which directed me to Microsoft KB article 895853, specifically the part two-thirds down titled "If mail relay occurs from an account on an Exchange computer that is not configured as an open relay". The application event log is filling with this type of activity (event ID 7004, 7002 and 3018 entries):

    Event Type: Error  Event Source: MSExchangeTransport  Event Category: SMTP Protocol  Event ID: 7004  Date: 1/18/2011  Time: 7:33:29 AM  User: N/A  Computer: SERVER
    Description: This is an SMTP protocol error log for virtual server ID 1, connection #621. The remote host "212.52.84.180", responded to the SMTP command "rcpt" with "550 #5.1.0 Address rejected [email protected] ". The full command sent was "RCPT TO: ". This will probably cause the connection to fail.

    and this:

    Event Type: Warning  Event Source: MSExchangeTransport  Event Category: SMTP Protocol  Event ID: 7002  Date: 1/18/2011  Time: 7:33:29 AM  User: N/A  Computer: SERVER
    Description: This is an SMTP protocol warning log for virtual server ID 1, connection #620. The remote host "212.52.84.170", responded to the SMTP command "rcpt" with "452 Too many recipients received this hour ". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    or a variant of:

    Event Type: Warning  Event Source: MSExchangeTransport  Event Category: SMTP Protocol  Event ID: 7002  Date: 1/18/2011  Time: 8:39:21 AM  User: N/A  Computer: SERVER
    Description: This is an SMTP protocol warning log for virtual server ID 1, connection #661. The remote host "82.57.200.133", responded to the SMTP command "rcpt" with "421 Service not available - too busy ". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    also:

    Event Type: Error  Event Source: MSExchangeTransport  Event Category: NDR  Event ID: 3018  Date: 1/18/2011  Time: 9:49:37 AM  User: N/A  Computer: SERVER
    Description: A non-delivery report with a status code of 5.4.0 was generated for recipient rfc822;[email protected] (Message-ID ). Causes: This message indicates a DNS problem or an IP address configuration problem. Solution: Check the DNS using nslookup or dnsq. Verify the IP address is in IPv4 literal format.
    Data: 0000: ef 02 04 c0 ï..À

    Any guidance, suggestions or tests to perform would be greatly appreciated.
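    For anyone repeating the open-relay telnet test mentioned above, the session looks roughly like this (hostname and addresses are placeholders; the decisive part is the server's response to a RCPT TO for a domain it is not responsible for):

    ```
    telnet mail.example.com 25
    HELO test.local
    MAIL FROM: <sender@outside-domain.example>
    RCPT TO: <recipient@another-outside-domain.example>
    ```

    A 250-style acceptance of that RCPT TO would indicate the server is relaying; a 550 "relaying denied" style rejection is what you want to see.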


  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. It provides a host of great new functionality that unifies features of the various AJAX/REST APIs Microsoft created before it, ASP.NET AJAX and WCF REST specifically, and combines them into a more consistent whole. Web API addresses many of the concerns that developers had with those older APIs, namely that it was very difficult to build consistent REST-style resource APIs. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST-compliant APIs centered on resource-based solutions and HTTP verbs. RPC-style calls, which are common with AJAX callbacks in web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX-style AJAX callbacks.

    RPC vs. 'Proper' REST

    RPC-style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server-side resources or 'nouns', RPC calls tend to simply map a server-side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls, like SOAP calls, tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a web site I'd wager that a large percentage of Web API use cases will fall towards RPC-style calls rather than 'proper' REST-style APIs. Web applications that need things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place, for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case, Web API does a good job of providing both the RPC abstraction and the HTTP verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls

    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP-verb-based routing endpoints. Verb-based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP verb routing is limited to the few HTTP verbs available (plus separate method signatures) and, worse than that, you can't easily extend the controller with custom routes or action routing beyond that.
    Thankfully Web API also supports action-based routing, which allows you to create RPC-style endpoints fairly easily:

    ```csharp
    RouteTable.Routes.MapHttpRoute(
        name: "AlbumRpcApiAction",
        routeTemplate: "albums/{action}/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi",
            action = "GetAblums"
        });
    ```

    This uses traditional MVC-style {action} method routing, which is different from the HTTP-verb-based routing you might have read a bunch about in conjunction with Web API. Action-based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself, or on the query string, via model binding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

    ```csharp
    [HttpPost]
    public string PostAlbum(Album album)
    {
        return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
    }
    ```

    There are actually two ways to call this endpoint (albums/PostAlbum):

    Using the Model Binder with plain POST values

    In this mechanism you're sending plain urlencoded POST values to the server, which the model binder then maps to the parameter. Each property value is matched to each matching POST value. This works similarly to the way MVC's model binder works. Here's how you can POST using the model binder and jQuery:

    ```javascript
    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
        success: function (result) {
            alert(result);
        },
        error: function (xhr, status, p3, p4) {
            var err = "Error " + " " + status + " " + p3;
            if (xhr.responseText && xhr.responseText[0] == "{")
                err = JSON.parse(xhr.responseText).message;
            alert(err);
        }
    });
    ```

    The model binder and its straight form-based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter

    The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first:

    ```javascript
    var album = {
        AlbumName: "PowerAge",
        Entered: new Date(1977, 0, 1)
    };

    $.ajax({
        url: "albums/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify(album),
        success: function (result) {
            alert(result);
        }
    });
    ```

    Here the data is sent using a JSON object rather than form data, and the data is JSON-encoded over the wire, which is a little more efficient since there's no URL encoding. BTW, notice that Web API automatically deals with the date: I provided the date as a plain string, rather than a JavaScript date value, and the formatter and model binder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller

    Single parameters work fine in either of these RPC scenarios, and that's to be expected. Model binding always works against a single object because it maps a model.
    But what happens when you want to pass multiple parameters? Consider an API controller method with a signature like the following:

    ```csharp
    [HttpPost]
    public string PostAlbum(Album album, string userToken)
    ```

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell it is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

    Use both POST *and* QueryString Parameters in Conjunction

    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with /album/PostAlbum?userToken=sekkritt, but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping; they only work with simple types.

    Use a single Object that wraps the two Parameters

    If you go by service-based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have xxxRequest and xxxResponse classes that wrap the inputs and outputs. Here's what this method might look like:

    ```csharp
    public PostAlbumResponse PostAlbum(PostAlbumRequest request)
    {
        var album = request.Album;
        var userToken = request.UserToken;

        return new PostAlbumResponse()
        {
            IsSuccess = true,
            Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
        };
    }
    ```

    with these support types:

    ```csharp
    public class PostAlbumRequest
    {
        public Album Album { get; set; }
        public User User { get; set; }
        public string UserToken { get; set; }
    }

    public class PostAlbumResponse
    {
        public string Result { get; set; }
        public bool IsSuccess { get; set; }
        public string ErrorMessage { get; set; }
    }
    ```

    To call this method you now have to assemble these objects on the client and send them up as JSON:

    ```javascript
    var album = {
        AlbumName: "PowerAge",
        Entered: "1/1/1977"
    };
    var user = {
        Name: "Rick"
    };
    var userToken = "sekkritt";

    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) {
            alert(result.Result);
        }
    });
    ```

    I assemble the individual types first and then combine them, in the data: property of the $.ajax() call, into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create request and response types for each method signature. If you have common parameters that are always passed (say you always pass an album or user token) you might be able to abstract this into a single object that gets reused for all methods, but that gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method can actually use.

    Use JObject to parse multiple Property Values out of an Object

    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service, you always passed an object which contained properties for each parameter:

    ```javascript
    { parm1: Value, parm2: Value2 }
    ```

    WCF REST/ASP.NET AJAX would then parse these top-level property values and map them to the parameters of the endpoint method. This automatic type-wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result, and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API controller end:

    ```csharp
    [HttpPost]
    public string PostAlbum(JObject jsonData)
    {
        dynamic json = jsonData;
        JObject jalbum = json.Album;
        JObject juser = json.User;
        string token = json.UserToken;

        var album = jalbum.ToObject<Album>();
        var user = juser.ToObject<User>();

        return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
    }
    ```

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

    ```javascript
    var album = {
        AlbumName: "PowerAge",
        Entered: "1/1/1977"
    };
    var user = {
        Name: "Rick"
    };
    var userToken = "sekkritt";

    $.ajax({
        url: "samples/PostAlbum",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
        success: function (result) {
            alert(result);
        }
    });
    ```

    Summary

    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but it's good to know what options are available to simulate this behavior. Now let me say that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the query string. But saying that doesn't mean that occasionally you won't run into a situation where you need to pass several objects to the server, and all three of the options I mentioned have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api.


  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo": the compute charges for running the 16 instances for an hour were $1.92, and factoring in the bandwidth charges it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed within a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high-quality ray-traced image. The following is an example of a basic PolyRay scene file.
    ```
    background Midnight_Blue

    static define matte surface { ambient 0.1 diffuse 0.7 }
    define matte_white texture { matte { color white } }
    define matte_black texture { matte { color dark_slate_gray } }
    define position_cylindrical 3
    define lookup_sawtooth 1
    define light_wood <0.6, 0.24, 0.1>
    define median_wood <0.3, 0.12, 0.03>
    define dark_wood <0.05, 0.01, 0.005>

    define wooden texture {
        noise surface {
            ambient 0.2
            diffuse 0.7
            specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical
            position_scale 1
            lookup_fn lookup_sawtooth
            octaves 1
            turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood]
                [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood]
                [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood]
                [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood])
        }
    }
    define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
    define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
    define steely_blue texture { shiny { color black } }
    define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

    viewpoint {
        from <4.000, -1.000, 1.000>
        at <0.000, 0.000, 0.000>
        up <0, 1, 0>
        angle 60
        resolution 640, 480
        aspect 1.6
        image_format 0
    }

    light <-10, 30, 20>
    light <-10, 30, -20>

    object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
    object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
    object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }
    ```

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space and looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders. The front box is modeled using a clear, slightly reflective solid with the same refractive index as glass; the other shapes are modeled as metal.

    ```
    object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
    object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
    object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
    object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
    object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }
    ```

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation, 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of each pin is set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinates of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, giving .NET developers easy access to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor is used. Any part of the object in front of the depth range results in a white pixel; anything behind the depth range is black. Within the depth range, the pixels in the image are set to RGB values from 0,0,0 to 255,255,255. A screenshot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine-tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames of the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers | Cost
    1                 | $500
    16                | $8,000
    256               | $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles | Render Time | Cost
    1                      | 256 hours   | $30.72
    16                     | 16 hours    | $30.72
    256                    | 1 hour      | $30.72

    Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect: used, combined with the Kinect Explorer, to create a stream of depth images.
    - Animation Creator: this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages are added to the jobs queue.
    - Process Monitor: this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader: this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage: queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="CloudRay">
      <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
        <Imports>
        </Imports>
        <ConfigurationSettings>
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <LocalResources>
          <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
        </LocalResources>
      </WorkerRole>
    </ServiceDefinition>
    ```

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's. Each worker role will use the following process to render the animation frames.

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command-line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command-line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

    ```csharp
    public override void Run()
    {
        // Set environment variables
        string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
        string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

        LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
        string localStorageRootPath = rayStorage.RootPath;

        JobQueue jobQueue = new JobQueue("renderjobs");
        JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
        CloudRayBlob imageBlob = new CloudRayBlob("images");
        RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

        Frames = 0;

        while (true)
        {
            // Get the render job from the queue
            CloudQueueMessage jobMsg = jobQueue.Get();

            if (jobMsg != null)
            {
                // Get the file details
                string sceneFile = jobMsg.AsString;
                string tgaFile = sceneFile.Replace(".pi", ".tga");
                string jpgFile = sceneFile.Replace(".pi", ".jpg");

                string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                // Copy the scene file to local storage
                sceneBlob.DownloadFile(sceneFilePath);

                // Run the ray tracer
                string polyrayArguments =
                    string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                Process polyRayProcess = new Process();
                polyRayProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                polyRayProcess.StartInfo.Arguments = polyrayArguments;
                polyRayProcess.Start();
                polyRayProcess.WaitForExit();

                // Convert the image
                string dtaArguments =
                    string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                Process dtaProcess = new Process();
                dtaProcess.StartInfo.FileName =
                    Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                dtaProcess.StartInfo.Arguments = dtaArguments;
                dtaProcess.Start();
                dtaProcess.WaitForExit();

                // Upload the image to blob storage
                imageBlob.UploadFile(jpgFilePath);

                // Add a download job
                downloadQueue.Add(jpgFile);

                // Delete the render job message
                jobQueue.Delete(jobMsg);

                Frames++;
            }
            else
            {
                Thread.Sleep(1000);
            }

            // Log the worker role activity
            roleLifecycleDataSource.Alive
                ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
        }
    }
    ```

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

    ```csharp
    public class RoleLifecycle : TableServiceEntity
    {
        public string ServerName { get; set; }
        public string Status { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public long SecondsRunning { get; set; }
        public DateTime LastActiveTime { get; set; }
        public int Frames { get; set; }
        public string Comment { get; set; }

        public RoleLifecycle()
        {
        }

        public RoleLifecycle(string roleName)
        {
            PartitionKey = roleName;
            RowKey = Utils.GetAscendingRowKey();
            Status = "Started";
            StartTime = DateTime.UtcNow;
            LastActiveTime = StartTime;
            EndTime = StartTime;
            SecondsRunning = 0;
            Frames = 0;
        }
    }
    ```

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the render farm's resources are being used.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I've uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective; a sketch of this pattern follows the list below. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
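The JobQueue and CloudRayBlob helper classes used in the worker role code are not listed in this post. As an illustration of the queue-based load balancing described in the first tip, a minimal sketch of what a JobQueue wrapper could look like with the Windows Azure storage client library is shown below; the class body is an assumption based on how it is called in the Run method above, not the actual CloudRay source.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class JobQueue
{
    private readonly CloudQueue queue;

    public JobQueue(string queueName)
    {
        // DataConnectionString is the setting defined in the service configuration.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExist();
    }

    // Add a job message, e.g. the name of a scene description file.
    public void Add(string content)
    {
        queue.AddMessage(new CloudQueueMessage(content));
    }

    // Get the next job message, or null if the queue is empty.
    // A frame takes minutes to render, so the message is hidden for longer
    // than the default 30 seconds to stop it reappearing and being
    // rendered twice by another instance.
    public CloudQueueMessage Get()
    {
        return queue.GetMessage(TimeSpan.FromMinutes(10));
    }

    // Delete the message once the job has completed successfully.
    public void Delete(CloudQueueMessage message)
    {
        queue.DeleteMessage(message);
    }
}

This pattern also hints at why the design is robust: if an instance fails mid-render, the message it was working on becomes visible again after the timeout and another instance picks it up.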

    Read the article

  • How does mod_cache work with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
    Quick question before I explain my flow: can mod_cache perform revalidation with If-None-Match only when max-age has expired, in the case where it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to our origin server. For instance: the first request goes to the origin server, and then mod_cache saves the response into the cache according to the cache-control: max-age header. Only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates on every request, regardless of whether max-age is defined or not. My configuration is for Apache 2.4.3 (Windows); on Linux I see the same behavior that I will show below.

ServerName proxy.lo
ProxyRequests Off
ProxyPreserveHost Off
Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language"
RequestHeader set X-Forwarded-Proto "http"
# modify header for user agents
Header set Cache-Control "private, no-cache, no-store, no-transform"
CacheQuickHandler off
CacheDefaultExpire 300
# the origin server does not provide last-modified
CacheIgnoreNoLastMod On
CacheIgnoreCacheControl On
# the origin server defines cache-control: private, no-store only for user agents,
# therefore I would like to ignore those headers on the proxy server
CacheStorePrivate On
CacheStoreNoStore On
CacheEnable disk /
CacheRoot "C:/Apache.Cache"
CacheDirLevels 5
CacheDirLength 4
CacheMinExpire 15
CacheDetailHeader on
CacheHeader on
KeepAlive Off
ProxyPass / http://origin.lo/
ProxyPassReverse / http://origin.lo/

Also, I have turned on the debug log level to see how mod_cache handles the content for caching. I provide this to show that mod_proxy always decides that the content isn't fresh. Why? max-age was provided (see below).

[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage?
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers.
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0)
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo)
[Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80
[Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80
[Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo)
[Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo)
[Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter()
[Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage
[Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter.
[Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached.

The first request to the origin server without mod_proxy, to http://origin.lo/:

GET http://origin.lo/testpage HTTP/1.1
Host: origin.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

The first response from the origin without mod_proxy:

HTTP/1.1 200 OK
Cache-Control: must-revalidate, proxy-revalidate, max-age=30
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 04 Nov 2012 10:11:01 GMT
Content-Length: 1877

So I assumed that revalidation must occur only after 30 seconds from the successful response. Isn't that right? Let's check it :) Within 30 seconds, Google Chrome didn't perform any requests to the origin server to revalidate, and returned the response from its local cache.

When max-age has expired, Google Chrome performs a request to revalidate:

GET http://origin.lo/testpage HTTP/1.1
Host: origin.lo
Connection: keep-alive
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/xml
If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the response:

HTTP/1.1 304 Not Modified
Cache-Control: must-revalidate, proxy-revalidate, max-age=30
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 04 Nov 2012 10:16:20 GMT

As you can see, all works as expected: the user agent revalidates only when max-age has expired. Let's now try to perform the same flow through mod_proxy (see configuration above). The first request:

GET http://proxy.lo/testpage HTTP/1.1
Host: proxy.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the response was:

HTTP/1.1 200 OK
Date: Sun, 04 Nov 2012 10:23:36 GMT
Server: Apache
Cache-Control: private, no-cache, no-store, no-transform
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Content-Length: 1932
Vary: Accept,Content-Type,Content-Encoding,Accept-Language
X-Cache: MISS from proxy.lo
X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo
Connection: close

OK, let's look at the disk cache to see how the request and response were stored (binary data cut):

http://proxy.lo/testpage?
Cache-Control: private, no-cache, no-store, no-transform
Content-Type: application/json; charset=utf-8
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Date: Sun, 04 Nov 2012 10:27:15 GMT
Content-Length: 1932
Vary: Accept, Content-Type, Content-Encoding, Accept-Language
Host: proxy.lo
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
X-Forwarded-Proto: http
Cache-Control: max-age=300, must-revalidate
X-Forwarded-For: 192.168.1.100
X-Forwarded-Host: proxy.lo
X-Forwarded-Server: origin.lo

OK, what do we see? We see that the first response was stored with max-age=300 and must-revalidate. Looks good to me; let's perform the next call:

GET http://proxy.lo/testpage HTTP/1.1
Host: proxy.lo
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Accept: application/json
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

and the second response from mod_proxy:

HTTP/1.1 200 OK
Date: Sun, 04 Nov 2012 10:31:58 GMT
Server: Apache
Cache-Control: private, no-cache, no-store, no-transform
ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2"
Content-Length: 1932
Vary: Accept,Content-Type,Content-Encoding,Accept-Language
X-Cache: REVALIDATE from proxy.lo
X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo
Connection: close
Content-Type: application/json; charset=utf-8

SO, MY QUESTION IS: WHY does mod_proxy perform revalidation on every request, regardless of the fact that max-age is defined? N.B. Apache 2.4.3. Thanks, I would be grateful for any help.
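A quick way to reproduce this without a browser (assuming curl is available on the test machine; the X-Cache headers come from the CacheHeader and CacheDetailHeader directives in the configuration above) is to request the page twice and compare the headers:

curl -s -D - -o /dev/null http://proxy.lo/testpage
# repeat within 30 seconds; if max-age were being honoured, the second
# response would be expected to show "X-Cache: HIT from proxy.lo"
# rather than "X-Cache: REVALIDATE from proxy.lo"
curl -s -D - -o /dev/null http://proxy.lo/testpage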

    Read the article

  • Open Source but not Free Software (or vice versa)

    - by TRiG
    The definition of "Free Software" from the Free Software Foundation:

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program's users have the four essential freedoms:

The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission to do so.

The definition of "Open Source Software" from the Open Source Initiative:

Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria:

Free Redistribution
The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.

Source Code
The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.

Derived Works
The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.

Integrity of The Author's Source Code
The license may restrict source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time. The license must explicitly permit distribution of software built from modified source code. The license may require derived works to carry a different name or version number from the original software.

No Discrimination Against Persons or Groups
The license must not discriminate against any person or group of persons.

No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.

Distribution of License
The rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties.

License Must Not Be Specific to a Product
The rights attached to the program must not depend on the program's being part of a particular software distribution. If the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution.

License Must Not Restrict Other Software
The license must not place restrictions on other software that is distributed along with the licensed software. For example, the license must not insist that all other programs distributed on the same medium must be open-source software.

License Must Be Technology-Neutral
No provision of the license may be predicated on any individual technology or style of interface.

These definitions, although they derive from very different ideologies, are broadly compatible, and most Free Software is also Open Source Software and vice versa. I believe, however, that it is possible for this not to be the case: it is possible for software to be Open Source without being Free, or to be Free without being Open Source.

Questions
Is my belief correct? Is it possible for software to fall into one camp and not the other? Does any such software actually exist? Please give examples.

Clarification
I've already accepted an answer now, but I seem to have confused a lot of people, so perhaps a clarification is in order. I was not asking about the difference between copyleft (or "viral", though I don't like that term) and non-copyleft ("permissive") licenses. Nor was I asking about your personal idiosyncratic definitions of "Free" and "Open". I was asking about "Free Software as defined by the FSF" and "Open Source Software as defined by the OSI". Are the two always the same? Is it possible to be one without being the other? And the answer, it seems, is that it's impossible to be Free without being Open, but possible to be Open without being Free. Thank you everyone who actually answered the question.

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

    Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it! Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey, as my requests (and phone calls) weren't answered even though I tried several times - well, kind of disappointing, and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: Location, and Date and time. You can't just change one or both at the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member who couldn't make it to our April meetup. Okay, lesson learned, but now back to the actual meetup report ...

Shortly after the meeting I placed the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..."

Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I would also like to mention that the number of attendees went back to what I would call a "standard" level of participation. This time there were 12 craftsmen, but again a good number of First Timers.

Reactions of other attendees

Here are some impressions and feedback from our participants:

"Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments

"He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting

Glad to see a couple of first time attendees, especially students from the university itself.

Some details on the presentations

MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9

Gimme some love ... bash and other shells

Ish gave a great introduction to shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, and the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on an Ubuntu 14.04 machine, it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC – Let's talk about Scripting.

Oddities of scripting

After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not meant for implementing actual software, but simply for automating administrative procedures and simplifying routine jobs on a system. One of the cool oddities that he mentioned is that everything (!) in a shell is represented by strings; there are no other types like integer, float, or date-time that you'd expect from a full-fledged programming language. Let's have a look at his sample:  more to come... What's the output?

As a conclusion, Daniel suggests that shell scripting should be limited, but not restricted, to automating repetitive command stacks and batch jobs, startup wrappers for applications that set up the execution environment, and other not too sophisticated jobs. But as soon as it involves a little bit more logic, or you rely on performance, it's better to write an application in Ruby, Python, or Perl (among others of course). This also enables you to test your code properly.

MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks
MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming
MSCC: PowerShell as your scripting solution on Windows operating systems

The path of the Enlightened is long ... and tough.

Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand for a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay, then let's see what I could do - improvised style.

While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell back in 2006, and its predecessors MS-DOS and the Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their clients' needs and demands, they ignored the need to administer Windows server farms without any UI tools. PowerShell is actually a result of this, and seeing that shell scripting has been a common, reliable and fast tool in an administrator's toolbox for decades, Microsoft had to adapt from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely - only without a graphical user interface, and therefore it's easier to automate and schedule regular tasks.

Following is a sample of a PowerShell script file (extension .ps1):

$strComputer = "."
$colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
foreach ($objItem in $colItems)
{
    write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
    write-host "BIOS Version: " $objItem.BIOSVersion
    write-host "Build Number: " $objItem.BuildNumber
    write-host "Caption: " $objItem.Caption
    write-host "Code Set: " $objItem.CodeSet
    write-host "Current Language: " $objItem.CurrentLanguage
    write-host "Description: " $objItem.Description
    write-host "Identification Code: " $objItem.IdentificationCode
    write-host "Installable Languages: " $objItem.InstallableLanguages
    write-host "Installation Date: " $objItem.InstallDate
    write-host "Language Edition: " $objItem.LanguageEdition
    write-host "List Of Languages: " $objItem.ListOfLanguages
    write-host "Manufacturer: " $objItem.Manufacturer
    write-host "Name: " $objItem.Name
    write-host "Other Target Operating System: " $objItem.OtherTargetOS
    write-host "Primary BIOS: " $objItem.PrimaryBIOS
    write-host "Release Date: " $objItem.ReleaseDate
    write-host "Serial Number: " $objItem.SerialNumber
    write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
    write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
    write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
    write-host "SMBIOS Present: " $objItem.SMBIOSPresent
    write-host "Software Element ID: " $objItem.SoftwareElementID
    write-host "Software Element State: " $objItem.SoftwareElementState
    write-host "Status: " $objItem.Status
    write-host "Target Operating System: " $objItem.TargetOperatingSystem
    write-host "Version: " $objItem.Version
    write-host
}

This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too.

Upcoming Events

What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

Hacking Defence (14. May 2014)
WebCup Maurice (7. & 8. June 2014)
Developers Conference (TBA ~ July 2014)
Linuxfest 2014 (TBA ~ November 2014)

Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

My resume of the day

Spontaneous and improvised :) The new location at the University of Mauritius turned out very well; there is plenty of space, and it could be a good choice for future meetings. Especially, having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October, when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!

    Read the article

  • Binding data to subgrid

    - by bhargav
    I have a jqGrid with a subgrid... the data binding is done in JavaScript like this:

<script language="javascript" type="text/javascript">
var x = screen.width;
$(document).ready(function () {
    $("#projgrid").jqGrid({
        mtype: 'POST',
        datatype: function (pdata) { getData(pdata); },
        colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'],
        colModel: [
            { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true },
            { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' },
            { name: 'Project Name', index: 'project_title', width: 60, align: 'left' },
            { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' },
            { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' },
            { name: 'Status', index: 'Status', align: 'left', width: 15 },
            { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 },
            { name: 'Delete', index: 'Delete', align: 'left', width: 10 }],
        pager: '#proj_pager',
        rowList: [10, 20, 50],
        sortname: 'project_id',
        sortorder: 'asc',
        rowNum: 10,
        loadtext: "Loading....",
        subGrid: true,
        shrinkToFit: true,
        emptyrecords: "No records to view",
        width: x - 100,
        height: "100%",
        rownumbers: true,
        caption: 'Projects',
        subGridRowExpanded: function (subgrid_id, row_id) {
            var subgrid_table_id, pager_id;
            subgrid_table_id = subgrid_id + "_t";
            pager_id = "p_" + subgrid_table_id;
            $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>");
            jQuery("#" + subgrid_table_id).jqGrid({
                mtype: 'POST',
                postData: { entityIndex: function () { return row_id } },
                datatype: function (pdata) { getactionData(pdata); },
                height: "100%",
                colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'],
                colModel: [
                    { name: 'Event ID', index: 'Event ID' },
                    { name: 'Priority', index: 'IssueCode' },
                    { name: 'Deadline', index: 'IssueTitle' },
                    { name: 'From Date', index: 'From Date' },
                    { name: 'Title', index: 'Title' },
                    { name: 'Status', index: 'Status' },
                    { name: 'Hours', index: 'Hours' },
                    { name: 'Contact From', index: 'Contact From' },
                    { name: 'Contact To', index: 'Contact To' }],
                caption: "Action Details",
                rowNum: 10,
                pager: '#actionpager',
                rowList: [10, 20, 30, 50],
                sortname: 'Event ID',
                sortorder: "desc",
                loadtext: "Loading....",
                shrinkToFit: true,
                emptyrecords: "No records to view",
                rownumbers: true,
                ondblClickRow: function (rowid) { }
            });
            jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false });
        }
    });
    jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false });
});

function getactionData(pdata) {
    var project_id = pdata.entityIndex();
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked;
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pdata.rows;
    var npage = pdata.page;
    var sortindex = pdata.sidx;
    var sortdir = pdata.sord;
    var path = "project_brow.aspx/GetActionDetails"
    $.ajax({
        type: "POST",
        url: path,
        data: "{'project_id': '" + project_id + "','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}",
        contentType: "application/json; charset=utf-8",
        success: function (data, textStatus) {
            if (textStatus == "success")
                obj = jQuery.parseJSON(data.d)
            ReceivedData(obj);
        },
        error: function (data, textStatus) {
            alert('An error has occured retrieving data!');
        }
    });
}

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

function getData(pData) {
    var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value;
    var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value;
    var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value;
    var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value;
    var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value;
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value;
    var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pData.rows;
    var npage = pData.page;
    var sortindex = pData.sidx;
    var sortdir = pData.sord;
    PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed);
}

function AjaxSucceeded(data) {
    var obj = jQuery.parseJSON(data)
    if (obj != null) {
        if (obj.records != "") {
            ReceivedClientData(obj);
        } else {
            alert('No Data Available to Display')
        }
    }
}

function AjaxFailed(data) {
    alert('An error has occured retrieving data!');
}

function ReceivedClientData(data) {
    var thegrid = jQuery("#projgrid")[0];
    thegrid.addJSONData(data);
}
</script>

As you can see, projgrid is my parent grid and actiongrid is my subgrid, shown on clicking the '+' symbol. Projgrid is bound and displayed, but with the subgrid I'm able to get the data; the problem comes at the time of binding the data to the subgrid, which is done in the function named ReceivedData, which you can see here:

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

"data" is exactly what I wanted, but it cannot be bound to actiongrid, which is the subgrid. Thanks in advance for help.

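    For comparison, here is a sketch of how the generated subgrid id could be passed through to the binding callback instead of using a fixed "#actiongrid" selector (illustrative only, reusing the question's function names; not tested against this page):

// inside subGridRowExpanded, forward the generated table id:
// datatype: function (pdata) { getactionData(pdata, subgrid_table_id); },

function getactionData(pdata, subgridTableId) {
    // ... same $.ajax call as above, except the success handler forwards the id:
    // success: function (data, textStatus) {
    //     if (textStatus == "success") {
    //         ReceivedData(jQuery.parseJSON(data.d), subgridTableId);
    //     }
    // }
}

function ReceivedData(data, subgridTableId) {
    // bind the JSON to the table jqGrid actually created for this row
    var thegrid = jQuery("#" + subgridTableId)[0];
    thegrid.addJSONData(data);
}

Because subGridRowExpanded builds a new table id per row (subgrid_id + "_t"), a fixed selector only matches if an element with that exact id exists in the page.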
    Read the article

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c:

The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.
The wallet that holds the Oracle Database user IDs and passwords, stored in the 'credential store' in the new 12c dircrd/cwallet.sso file.

A wallet can be created using the 'create wallet' command. Adding a master key to an existing wallet is easy using the 'open wallet' and 'add masterkey' commands.

GGSCI (EDLVC3R27P0) 42> open wallet
Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 43> add masterkey
Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.

Existing GUI wallet utilities that come with other products, such as the Oracle Database "Oracle Wallet Manager", do not work on this version of the wallet. The default Oracle Wallet can be changed.

GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/*
-rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso

GGSCI (EDLVC3R27P0) 45> info masterkey
Masterkey Name:                 OGG_DEFAULT_MASTERKEY
Creation Date:                  Fri May 30 05:24:04 2014
Version:        Creation Date:                  Status:
1               Fri May 30 05:24:04 2014        Current

The second wallet file is used for the credential used to connect to a database, without exposing the user ID or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.

GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd

GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/*
-rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso

The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored on a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.

GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt

The new 12c UserIdAlias parameter is used to locate the credential in the wallet, so the source user ID and password do not need to be stored as parameters as long as they are in the wallet.

GGSCI (EDLVC3R27P0) 52> view param extwest
extract extwest
exttrail ./dirdat/ew
useridalias gguamer
table west.*;

The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard, and can be used with a primary extract or pump extract.

GGSCI (EDLVC3R27P0) 54> view param pwest
extract pwest
encrypttrail AES256
rmthost easthost, mgrport 15001
rmttrail ./dirdat/pe
passthru
table west.*;

Once the extracts are running, records can be encrypted using the wallet.

GGSCI (EDLVC3R27P0) 60> info extract *west

EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING
Checkpoint Lag       00:00:17 (updated 00:00:01 ago)
Process ID           24982
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2014-05-30 05:25:53
                     SCN 0.0 (0)

EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING
Checkpoint Lag       24:02:32 (updated 00:00:05 ago)
Process ID           24983
Log Read Checkpoint  File ./dirdat/ew000004
                     2014-05-29 05:23:34.748949  RBA 1483

The 'info masterkey' command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.

GGSCI (EDLVC3R27P0) 41> open wallet
Opened wallet at location 'dirwlt'.

GGSCI (EDLVC3R27P0) 42> info masterkey
Masterkey Name:                 OGG_DEFAULT_MASTERKEY
Creation Date:                  Fri May 30 05:24:04 2014
Version:        Creation Date:                  Status:
1               Fri May 30 05:24:04 2014        Current

Once the replicat is running, records can be decrypted using the wallet.

GGSCI (EDLVC3R27P0) 44> info reast

REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           25057
Log Read Checkpoint  File ./dirdat/pe000004
                     2014-05-30 05:28:16.000000  RBA 1546

There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.

GGSCI (EDLVC3R27P0) 45> view params reast
replicat reast
assumetargetdefs
discardfile ./dirrpt/reast.dsc, purge
useridalias ggueuro
map west.*, target east.*;

Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.

AMER_SQL>insert into west.branch values (50, 80071);
1 row created.

AMER_SQL>commit;
Commit complete.

The following encrypted record can be found using logdump.

Logdump 40 >n
2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546
Name: WEST.BRANCH
After  Image:                                             Partition 4   G  s
  0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1
  554f e65a 5185 0257                               | UO.ZQ..W
 Bad compressed block, found length of  7075 (x1ba3), RBA 1546

GGS tokens:
TokenID x52 'R' ORAROWID         Info x00  Length   20
  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..
TokenID x4c 'L' LOGCSN           Info x00  Length    7
  3231 3632 3934 33                                 | 2162943
TokenID x36 '6' TRANID           Info x00  Length   10
  3130 2e31 372e 3135 3031                          | 10.17.1501

The replicat automatically decrypted this record from the trail and then inserted the row into the target table using the wallet. This select verifies the row was inserted into the target database and that the data is not encrypted.

EURO_SQL>select * from branch where branch_number=50;

BRANCH_NUMBER                  BRANCH_ZIP
-------------                  ----------
           50                       80071

Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features, including how to use GoldenGate with the Oracle wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.

Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curricula including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.

    Read the article

  • Nested While Loop for mysql_fetch_array not working as I expected

    - by Ayush Saraf
    I'm trying to make a feed system for my website, in which I get data from the database using mysql_fetch_array and loop over the result with a while loop, and inside that loop I have repeated the same thing again with another while loop. The problem is that the nested while loop only executes once and doesn't repeat the rows.... Here is the code; I have used some functions and OOP but you will get the point! I have thought of an alternative but haven't tried it yet, because I want to solve it this way, i.e. using a for loop instead of a while loop might work...

public function get_feeds_from_my_friends(){
    global $Friends;
    global $User;
    $friend_list = $Friends->create_friend_list_ids();
    $sql = "SELECT feeds.id, feeds.user_id, feeds.post, feeds.date FROM feeds WHERE feeds.user_id IN (". $friend_list. ") ORDER BY feeds.id DESC";
    $result = mysql_query($sql);
    while ($rows = mysql_fetch_array($result)) {
        $id = $rows['user_id'];
        $dp = $User->user_detail_by_id($id, "dp");
        $user_name = $User->full_name_by_id($id);
        $post_id = $rows['id'];
        $final_result = "<div class=\"sharedItem\">";
        $final_result .= "<div class=\"item\">";
        $final_result .= "<div class=\"imageHolder\"><img src=\"". $dp ."\" class=\"profilePicture\" /></div>";
        $final_result .= "<div class=\"txtHolder\">";
        $final_result .= "<div class=\"username\"> " . $user_name . "</div>";
        $final_result .= "<div class=\"userfeed\"> " . $rows['post'] . "</div>";
        $final_result .= "<div class=\"details\">" . $rows['date'] . "</div>";
        $final_result .= "</div></div>";
        $final_result .= $this->get_comments_for_feed($post_id);
        $final_result .= "</div>";
        echo $final_result;
    }
}

public function get_comments_for_feed($feed_id){
    global $User;
    $sql = "SELECT feeds.comments FROM feeds WHERE feeds.id = " . $feed_id . " LIMIT 1";
    $result = mysql_query($sql);
    $result = mysql_fetch_array($result);
    $result = $result['comments'];
    $comment_list = get_array_string_from_feild($result);
    $sql = "SELECT comment_feed.user_id, comment_feed.comment, comment_feed.date FROM comment_feed ";
    $sql .= "WHERE comment_feed.id IN(" . $comment_list . ") ORDER BY comment_feed.date DESC";
    $result = mysql_query($sql);
    if(empty($result)){
        return "";
    } else {
        while ($rows = mysql_fetch_array($result)) {
            $id = $rows['user_id'];
            $dp = $User->user_detail_by_id($id, "dp");
            $user_name = $User->full_name_by_id($id);
            return "<div class=\"comments\"><div class=\"imageHolder\"><img src=\"". $dp ."\" class=\"profilePicture\" /></div>
                <div class=\"txtHolder\">
                <div class=\"username\"> " . $user_name . "</div>
                <div class=\"userfeed\"> " . $rows['comment'] . "</div>
                <div class=\"details\">" . $rows['date'] . "</div>
                </div></div>";
        }
    }
}
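    For comparison, here is a minimal sketch of the accumulate-then-return shape for a loop like the one in get_comments_for_feed (illustrative only; it reuses the variable names from the question and is not the fixed application code). A return statement inside a while loop ends the function on the first row, so the markup has to be collected across iterations and returned once afterwards:

$html = "";
while ($rows = mysql_fetch_array($result)) {
    // append this row's markup instead of returning inside the loop;
    // a return here would end the function after the first comment
    $html .= "<div class=\"comments\">" . $rows['comment'] . "</div>";
}
return $html;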

    Read the article

  • Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error.

    - by Kp
    I get the following error in Chrome every time I try to run my script on a Linux server: Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error. In Firefox it just shows a blank white page. Whenever I run it on my local test server (IIS on Windows 7) it runs exactly the way it should, with no errors. I am pretty sure that it is a problem with the imap_open function.

error_reporting(E_ALL);
echo "test";
// enter gmail username below e.g.-- $m_username = "yourusername";
$m_username = "username";
// enter gmail password below e.g.-- $m_password = "yourpword";
$m_password = "password";
// Enter the mail server to connect to
$server = '{imap.gmail.com:993/imap/ssl/novalidate-cert}INBOX';
// enter the number of unread messages you want to display from mailbox or
// enter 0 to display all unread messages e.g.-- $m_acs = 0;
$m_acs = 10;
// How far back in time do you want to search for unread messages - one month = 0, two weeks = 1, one week = 2, three days = 3,
// one day = 4, six hours = 5 or one hour = 6 e.g.-- $m_t = 6;
$m_t = 2;
//-----------Nothing More to edit below
//open mailbox
$m_mail = imap_open($server, $m_username . "@gmail.com", $m_password)
    // or throw an error
    or die("ERROR: " . imap_last_error());
// unix time gone by
$m_gunixtp = array(2592000, 1209600, 604800, 259200, 86400, 21600, 3600);
// Date to start search
$m_gdmy = date('d-M-Y', time() - $m_gunixtp[$m_t]);
//search mailbox for unread messages since $m_t date
$m_search = imap_search($m_mail, 'ALL');
// Order results starting from newest message
rsort($m_search);
//if m_acs > 0 then limit results
if($m_acs > 0){
    array_splice($m_search, $m_acs);
}
$read = $_GET['read'];
if ($read) {
    function get_mime_type(&$structure) {
        $primary_mime_type = array("TEXT", "MULTIPART", "MESSAGE", "APPLICATION", "AUDIO", "IMAGE", "VIDEO", "OTHER");
        if($structure->subtype) {
            return $primary_mime_type[(int) $structure->type] . '/' . $structure->subtype;
        }
        return "TEXT/PLAIN";
    }
    function get_part($stream, $msg_number, $mime_type, $structure = false, $part_number = false) {
        if(!$structure) {
            $structure = imap_fetchstructure($stream, $msg_number);
        }
        if($structure) {
            if($mime_type == get_mime_type($structure)) {
                if(!$part_number) {
                    $part_number = "1";
                }
                $text = imap_fetchbody($stream, $msg_number, $part_number);
                if($structure->encoding == 3) {
                    return imap_base64($text);
                } else if($structure->encoding == 4) {
                    return imap_qprint($text);
                } else {
                    return $text;
                }
            }
            if($structure->type == 1) /* multipart */
            {
                while(list($index, $sub_structure) = each($structure->parts)) {
                    if($part_number) {
                        $prefix = $part_number . '.';
                    }
                    $data = get_part($stream, $msg_number, $mime_type, $sub_structure, $prefix . ($index + 1));
                    if($data) {
                        return $data;
                    }
                } // END OF WHILE
            } // END OF MULTIPART
        } // END OF STRUCTURE
        return false;
    } // END OF FUNCTION
    // GET TEXT BODY
    $dataTxt = get_part($m_mail, $read, "TEXT/PLAIN");
    // GET HTML BODY
    $dataHtml = get_part($m_mail, $read, "TEXT/HTML");
    if ($dataHtml != "") {
        $msgBody = $dataHtml;
        $mailformat = "html";
    } else {
        $msgBody = ereg_replace("\n", "", $dataTxt);
        $mailformat = "text";
    }
    if ($mailformat == "text") {
        echo "<html><head><title>Messagebody</title></head><body bgcolor=\"white\">$msgBody</body></html>";
    } else {
        echo $msgBody; // It contains all HTML HEADER tags so we don't have to make them.
    }
    exit;
}
//loop it
foreach ($m_search as $what_ever) {
    //get imap header info for obj thang
    $obj_thang = imap_headerinfo($m_mail, $what_ever);
    //get body info for obj thang
    $obj_thangs = imap_body($m_mail, $what_ever);
    //Then spit it out below.........if you dont swallow
    echo "Message ID# " . $what_ever . " Date: " . date("F j, Y, g:i a", $obj_thang->udate) . " From: " . $obj_thang->fromaddress . " To: " . $obj_thang->toaddress . " Subject: " . $obj_thang->Subject . " ";
}
echo "" . $m_empty . "";
//close mailbox
imap_close($m_mail);
?>
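    Since the script runs fine under IIS but returns an empty response on the Linux server, one quick sanity check worth running there (a generic suggestion, not part of the original script) is to confirm that the IMAP extension is actually available in that PHP build:

<?php
// Prints bool(true) twice when the IMAP extension is available.
var_dump(extension_loaded('imap'));
var_dump(function_exists('imap_open'));
?>

If imap_open is undefined, calling it is a fatal error, and with display_errors turned off the server can send back nothing at all, which would match the empty response and blank page described above.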

    Read the article

  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a Monthly Bill object. The process is: Get users who have activity in the past month. Create the relevant billing entries for each user. Get the set of billing entries that we've just created. Create a Monthly Bill based on these entries. Everything works fine until Step 3 above. The Billing Entries are correctly created (I can see them in the database if I add a breakpoint after the Billing Entry creation method), but they are not pulled out of the database. As a result, an incorrect Monthly Bill is generated. If I run the code again (without clearing out the database), new Billing Entries are created and Step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following:

for (User user : usersWithActivities) {
    createBillingEntriesForUser(user.getId());
    userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
    createXMLBillForUser(user.getId(), userBillingEntries);
}

The methods called look like the following:

@Transactional
public void createBillingEntriesForUser(Long id) {
    UserManager userManager = ManagerFactory.getUserManager();
    User user = userManager.getUser(id);
    List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
    BillingEntry entry = new BillingEntry();
    if (null != events) {
        for (AccountEvent event : events) {
            if (event.getEventType().equals(EventType.ENABLE)) {
                Calendar cal = Calendar.getInstance();
                Date eventDate = event.getTimestamp();
                cal.setTime(eventDate);
                double startDate = cal.get(Calendar.DATE);
                double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                double numberOfDaysInUse = numOfDaysInMonth - startDate;
                double fractionToCharge = numberOfDaysInUse / numOfDaysInMonth;
                BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                amount.scale();
                entry.setAmount(amount);
                entry.setUser(user);
                entry.setTimestamp(eventDate);
                userManager.saveOrUpdate(entry);
            }
        }
    }
}

@Transactional
public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
    if (log.isDebugEnabled())
        log.debug("Getting all the billing entries for last month for user with ID " + id);
    //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
    String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
    //This parameter will be the start of the last month ie. start of billing cycle
    SearchParameter firstOfLastMonth = new SearchParameter();
    firstOfLastMonth.setTemporalType(TemporalType.DATE);
    //this parameter holds the start of the CURRENT month - ie. end of billing cycle
    SearchParameter firstOfCurrentMonth = new SearchParameter();
    firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
    Query query = super.entityManager.createQuery(queryString);
    query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
    query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
    query.setParameter("id", id);
    List<BillingEntry> entries = query.getResultList();
    return entries;
}

public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
    BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
    UserManager userManager = ManagerFactory.getUserManager();
    MonthlyBill mb = new MonthlyBill();
    User user = userManager.getUser(id);
    mb.setUser(user);
    mb.setTimestamp(new Date());
    Set<BillingEntry> entries = new HashSet<BillingEntry>();
    entries.addAll(billingEntries);
    String xml = createXmlForMonthlyBill(user, entries);
    mb.setXmlBill(xml);
    mb.setBillingEntries(entries);
    MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
    return bill;
}

Help with this issue would be greatly appreciated as it's been racking my brain for weeks now! Thanks in advance, Gearóid.
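    Edit: one thing worth ruling out. Because createBillingEntriesForUser and getLastMonthsBillingEntriesForUser each run in their own transaction while the calling loop runs in none, the visibility of the freshly persisted entries depends on commit and flush timing. Running all the per-user steps inside a single transaction, with an explicit flush before the query, makes that timing deterministic. This is a sketch only: createMonthlyBillForUser is a hypothetical wrapper, and it assumes the same injected EntityManager as the code above.

@Transactional
public void createMonthlyBillForUser(Long id) {
    // Step 2: create the entries; the call joins this method's transaction
    createBillingEntriesForUser(id);

    // Push the pending INSERTs to the database before querying. Hibernate
    // usually flushes automatically before a query on the same entities,
    // but an explicit flush removes any doubt while debugging.
    entityManager.flush();

    // Steps 3 and 4: read the entries back and build the bill
    Collection<BillingEntry> entries = getLastMonthsBillingEntriesForUser(id);
    createXMLBillForUser(id, entries);
}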

    Read the article

  • I have this broken php upload script

    - by Anders Kitson
    I have this script for uploading an image and content from a form; it works in one project but not the other. I have spent a good few hours trying to debug it, and I am hoping someone can point out the issue. The comments mark where I have tried to debug. The first error I got was the "Invalid file" echo at the beginning of the last commented-out block. With these specific areas commented out, the upload name and type that I am supposed to be grabbing from the form are not being echoed. I think this is where the error is occurring, but I can't quite seem to find it. Thanks.

<?php
include("../includes/connect.php");

/*
if ((($_FILES["file"]["type"] == "image/gif")
  || ($_FILES["file"]["type"] == "image/jpeg")
  || ($_FILES["file"]["type"] == "image/pjpeg"))
  && ($_FILES["file"]["size"] < 2000000)) {
*/

if ($_FILES["file"]["error"] > 0) {
    echo "Return Code: " . $_FILES["file"]["error"] . "<br />";
} else {
    echo "Upload: " . $_FILES["file"]["name"] . "<br />";
    echo "Type: " . $_FILES["file"]["type"] . "<br />";
    echo "Size: " . ($_FILES["file"]["size"] / 1024) . " Kb<br />";
    echo "Temp file: " . $_FILES["file"]["tmp_name"] . "<br />";

    /* GRAB FORM DATA */
    $title = $_POST['title'];
    $date = $_POST['date'];
    $content = $_POST['content'];
    $imageName1 = $_FILES["file"]["name"];

    echo $title;
    echo "<br/>";
    echo $date;
    echo "<br/>";
    echo $content;
    echo "<br/>";
    echo $imageName1;

    $sql = "INSERT INTO blog (title,date,content,image) VALUES( \"$title\", \"$date\", \"$content\", \"$imageName1\" )";
    $results = mysql_query($sql) or die(mysql_error());
    echo "<br/>";

    if (file_exists("../images/blog/" . $_FILES["file"]["name"])) {
        echo $_FILES["file"]["name"] . " already exists. ";
    } else {
        move_uploaded_file($_FILES["file"]["tmp_name"], "../images/blog/" . $_FILES["file"]["name"]);
        echo "Stored in: " . "../images/blog/" . $_FILES["file"]["name"];
    }
}

/*
} else {
    echo "Invalid file" . "<br/>";
    echo "Type: " . $_FILES["file"]["type"] . "<br />";
}
*/

//lets create a thumbnail of this uploaded image.
/*
$fileName = $_FILES["file"]["name"];
createThumb($fileName, 310, "../images/blog/thumbs/");

function createThumb($thisFileName, $thisThumbWidth, $thisThumbDest) {
    $thisOriginalFilePath = "../images/blog/" . $thisFileName;
    list($width, $height) = getimagesize($thisOriginalFilePath);
    $imgRatio = $width / $height;
    $thisThumbHeight = $thisThumbWidth / $imgRatio;
    $thumb = imagecreatetruecolor($thisThumbWidth, $thisThumbHeight);
    $source = imagecreatefromjpeg($thisOriginalFilePath);
    imagecopyresampled($thumb, $source, 0, 0, 0, 0, $thisThumbWidth, $thisThumbHeight, $width, $height);
    $newFileName = $thisThumbDest . $thisFileName;
    imagejpeg($thumb, $newFileName, 80);
    echo "<p><img src=\"$newFileName\" /></p>";
    //header("location: http://www.google.ca");
}
*/
?>
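    Before digging into the PHP itself, it is worth confirming what actually arrives in $_FILES: a form that lacks enctype="multipart/form-data" posts no file data at all, which would produce exactly this kind of failure in one project but not another. A quick check follows; the form markup is a sketch, and "upload.php" stands in for whatever this script is named:

<!-- The upload form must declare multipart encoding or $_FILES stays empty -->
<form action="upload.php" method="post" enctype="multipart/form-data">
    <input type="file" name="file" />
    <input type="text" name="title" />
    <input type="submit" value="Upload" />
</form>

<?php
// Temporary debug at the top of the upload script:
// dump exactly what PHP received from the form.
var_dump($_FILES);
var_dump($_POST);
?>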

    Read the article

    Build an App with JPA 2, EJB 3.1, and JSF 2! Getting Started with Java EE 6 Development on WebLogic Server 12c | WebLogic Channel | Oracle Japan

    - by ???02
    WebLogic Server 12c, released in February 2012, is Oracle's Java EE 6 compliant application server. This series uses Oracle Enterprise Pack for Eclipse 12c and WebLogic Server 12c to walk through Java EE 6 application development; in this installment we create a persistence entity with JPA 2.0 and implement a session bean with EJB 3.1. Over the three parts of the series we build a simple web application using three core Java EE 6 technologies: JPA (Java Persistence API) 2.0, EJB (Enterprise JavaBeans) 3.1, and JSF (JavaServer Faces) 2.0. JPA is the standard API for O/R mapping, that is, persisting Java objects to a relational database; EJB session beans implement the business logic; and JSF supplies the user-interface layer. This time we use JPA to map the EMPLOYEES table of an Oracle Database to a Java entity class. Where the old Entity Beans required lengthy XML deployment descriptors, a JPA 2.0 persistence entity in Java EE 6 is declared almost entirely with annotations, so there is very little configuration to write or maintain. Start by creating the project. In Oracle Enterprise Pack for Eclipse (OEPE), choose File > New > Other. In the New dialog, select Web > Dynamic Web Project and click Next. In the Dynamic Web Project dialog, enter "OOW" as the Project name and click New Runtime... under Target Runtime. In the New Server Runtime Environment dialog, select Oracle > Oracle WebLogic Server 12c (12.1.1) and click Next. Set WebLogic home to C:\Oracle\Middleware\wlserver_12.1 and click Finish; once WebLogic home is set, the Java home field is filled in automatically. Back in the Dynamic Web Project dialog, click Finish to create the project. Next, enable the JPA 2.0 facet. In the Project Explorer, right-click the OOW project and choose Properties. In the properties dialog, open Project Facets and check JPA (the Details pane should show the version as JPA 2.0), then click the Further configuration available... link. In the Modify Faceted Project dialog, click Add Connection... under Connection. In the New Connection Profile dialog, select Oracle Database Connection as the Connection Profile Type and click Next. On the Specify a Driver and Connection Details page, select Oracle Database 10g Driver Default under Drivers and enter the following connection properties: SID: xe, Host: localhost, Port number: 1521, User name: HR, Password: hr. Click Test Connection, and once "Ping Succeeded!" is displayed, click Finish. Click OK to close the Modify Faceted Project dialog, then OK in Properties for OOW. Now generate an entity from a table. In the Project Explorer, right-click the OOW project and choose JPA Tools > Generate Entities from Tables.... In the Generate Custom Entities dialog, select HR as the Schema, check the EMPLOYEES table, and click Next. Click Next on the following page, and on the Customize Default Entity Generation page enter "model" as the Package and click Finish.
    Running the wizard generates model/Employee.java, an entity class mapped to the Oracle Database EMPLOYEES table. In the Project Explorer, open it under OOW > Java Resources > src > model. The generated entity class looks like this; the numbered comments are callouts for the explanation below (Employee.java):

package model;

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;
import java.util.Set;
import javax.persistence.Column;
<... snip ...>

/**
 * The persistent class for the EMPLOYEES database table.
 */
@Entity                     // (1)
@Table(name="EMPLOYEES")    // (2)
// (A)
public class Employee implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id                             // (3)
    @Column(name="EMPLOYEE_ID")
    private long employeeId;

    @Column(name="COMMISSION_PCT")
    private BigDecimal commissionPct;

    @Column(name="DEPARTMENT_ID")
    private BigDecimal departmentId;

    private String email;

    @Column(name="FIRST_NAME")
    private String firstName;

    @Temporal(TemporalType.DATE)    // (4)
    @Column(name="HIRE_DATE")
    private Date hireDate;

    @Column(name="JOB_ID")
    private String jobId;

    @Column(name="LAST_NAME")
    private String lastName;

    @Column(name="PHONE_NUMBER")
    private String phoneNumber;

    private BigDecimal salary;

    //bi-directional many-to-one association to Employee
    <... snip ...>
}

The entity is a plain Java class whose persistence behavior is declared entirely through annotations. @Entity (1) marks the class as a persistence entity, and @Table (2) binds it to the EMPLOYEES table; if the name element of @Table is omitted, the class name is used as the table name. @Id (3) marks the primary-key field, and @Temporal(TemporalType.DATE) (4) maps the java.util.Date field to a SQL DATE column. Next, add a query at the point marked (A). JPA queries are written in JPQL (Java Persistence Query Language). JPQL resembles SQL, but it operates on entities and their fields rather than on tables and columns. Here we define a named query, Employee.selectByName, that returns the Employee entities whose firstName matches a pattern, ordered by employeeId:

import java.util.Date;
import java.util.Set;
import javax.persistence.Column;
<... snip ...>

/**
 * The persistent class for the EMPLOYEES database table.
 */
@Entity                     // (1)
@Table(name="EMPLOYEES")    // (2)
@NamedQueries({
    @NamedQuery(name="Employee.selectByName",
                query="select e from Employee e where e.firstName like :name order by e.employeeId")
})
<... snip ...>

Finally, open OOW > JPA Content > persistence.xml, select the Connection tab, and set Database > JTA data source: to jdbc/test. That completes the persistence configuration: with Java EE 6 and JPA 2.0, the entity is declared through annotations, so there is no lengthy XML mapping file to write or maintain.
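    For reference, the persistence.xml that results from the edit above is only a few lines. The following is a sketch, assuming the persistence unit is named OOW as created earlier; the exact file OEPE generates may differ slightly:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence">
    <persistence-unit name="OOW" transaction-type="JTA">
        <!-- The data source configured on the Connection tab -->
        <jta-data-source>jdbc/test</jta-data-source>
        <class>model.Employee</class>
    </persistence-unit>
</persistence>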
    Next, implement the business logic that uses the JPA 2.0 entity as an EJB 3.1 stateless session bean. As with JPA, EJB 3.1 needs essentially no XML deployment descriptor: a session bean is just an annotated POJO. Our bean exposes a single method, public List<Employee> getEmp(String keyword), which returns the Employee entities whose firstName contains the keyword. To create it, right-click the OOW project and choose New > Other, select EJB > Session Bean (EJB 3.x), and click Next. In the Create EJB 3.x Session Bean dialog, enter "ejb" as the Java package and "EmpLogic" as the class name, choose Stateless as the State Type, check No-interface, and click Finish. The wizard generates the skeleton of the stateless session bean, EmpLogic.java. The @Stateless annotation is what makes this class a stateless session bean (EmpLogic.java):

package ejb;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;
<... snip ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {

    public EmpLogic() {
    }
}

Next, inject an EntityManager so the bean can use the persistence unit. Add the imports and the annotated field shown below (EmpLogic.java):

package ejb;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;       // (1)
import javax.persistence.PersistenceContext;  // (2)
<... snip ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {

    @PersistenceContext(unitName = "OOW")  // (3)
    private EntityManager em;              // (4)

    public EmpLogic() {
    }
}

The EntityManager is JPA's gateway to the persistence context: every CRUD operation on entities goes through it. Rather than instantiating one yourself, you have the container inject it with the @PersistenceContext annotation (3); its unitName element must match the name attribute of the persistence-unit element in persistence.xml ("OOW" here). Finally, implement the getEmp method. It wraps the keyword in "%" wildcards and executes the Employee.selectByName named query defined earlier (EmpLogic.java):

package ejb;

import java.util.List;                        // (1)
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
<... snip ...>
import model.Employee;

@Stateless
@LocalBean
public class EmpLogic {

    @PersistenceContext(unitName = "OOW")
    private EntityManager em;

    public EmpLogic() {
    }

    @SuppressWarnings("unchecked")                  // (2)
    public List<Employee> getEmp(String keyword) {  // (3)
        StringBuilder param = new StringBuilder();
        param.append("%");
        param.append(keyword);
        param.append("%");
        return em.createNamedQuery("Employee.selectByName")
                 .setParameter("name", param.toString())
                 .getResultList();
    }
}

That completes the EJB 3.1 stateless session bean. Next time, we build the screens with JSF 2.0 and also cover publishing a RESTful web service with JAX-RS.
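    Not part of the article's steps, but useful as a quick smoke test: with the no-interface view created above, any managed component can have the container inject the bean. The servlet below is a sketch; the getFirstName/getLastName accessors are assumed to be the getters OEPE generated for the entity:

package web;

import java.io.IOException;
import java.io.PrintWriter;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import ejb.EmpLogic;
import model.Employee;

@WebServlet("/emp")
public class EmpTestServlet extends HttpServlet {

    @EJB  // the container injects the no-interface EJB created above
    private EmpLogic empLogic;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        PrintWriter out = resp.getWriter();
        // Lists every employee whose FIRST_NAME contains ?name=... ;
        // pass an empty string to list all employees.
        String name = req.getParameter("name") != null ? req.getParameter("name") : "";
        for (Employee e : empLogic.getEmp(name)) {
            out.println(e.getFirstName() + " " + e.getLastName());
        }
    }
}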

    Read the article

  • ANY way to consolidate this code?

    - by JM4
    I am building a PHP registration form which takes the following fields for up to 20 athletes: First Name Middle Initial Last Name Federation Number Address City State Zip DOB SSN Phone Email I am only through 7 of the fields for each fighter and my php file is very large (over 40kb). Is there ANY way to consolidate this code at all? I am also having to validate the information on each field (as I said - 20 athletes x 12 fields = 240 validations on a single page). If I can send any further code let me know! <form id="Form" action="<?php $_SERVER['PHP_SELF']; ?>" method="post" name="Form" onsubmit="return Enroll_Form_Validator(this)"> <p class="title">Your Fighters' Information</p> <p>Please complete the following fields with your <span style="color:red;"> Fighters' Information</span> to continue your enrollment.</p> <br /> <?php // if $errors is not empty, the form must have failed one or more validation // tests. Loop through each and display them on the page for the user if (!empty($errors)) { echo "<div class='error'>Please fix the following errors:\n<ul>"; foreach ($errors as $error) echo "<li>$error</li>\n"; echo "</ul></div>"; } ?> <?php if ($_SESSION['Num_Fighters'] > "0") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F1FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F1MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F1LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F1FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F1SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN1']; ?>" /> - <input type="text" name="F1SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN2']; ?>" /> - <input type="text" name="F1SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F1DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F1DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F1DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F1DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F1DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F1DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F1Address" value="<?php echo $fields['F1Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input type="text" name="F1City" 
onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F1City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F1State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F1Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F1Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone1']; ?>" /> ) <input type="text" name="F1Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone2']; ?>" /> - <input type="text" name="F1Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F1Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F1Email" value="<?php echo $fields['F1Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "1") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F2FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F2MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F2LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F2FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F2SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN1']; ?>" /> - <input type="text" name="F2SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN2']; ?>" /> - <input type="text" name="F2SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F2DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F2DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F2DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F2DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F2DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F2DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F2Address" value="<?php echo $fields['F2Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input type="text" 
name="F2City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F2City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F2State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F2Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F2Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone1']; ?>" /> ) <input type="text" name="F2Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone2']; ?>" /> - <input type="text" name="F2Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F2Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F2Email" value="<?php echo $fields['F2Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "2") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F3FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F3MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F3LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F3FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F3SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN1']; ?>" /> - <input type="text" name="F3SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN2']; ?>" /> - <input type="text" name="F3SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F3DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F3DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F3DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F3DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F3DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F3DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F3Address" value="<?php echo $fields['F3Address']; ?>" /></td> </tr> <tr> <td>City: </td> <td><input 
type="text" name="F3City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F3City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F3State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F3Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F3Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone1']; ?>" /> ) <input type="text" name="F3Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone2']; ?>" /> - <input type="text" name="F3Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F3Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F3Email" value="<?php echo $fields['F3Email']; ?>" /></td> </tr> </table> <?php } ?> <br /> <?php if ($_SESSION['Num_Fighters'] > "3") { ?> <table class="demoTable"> <tr> <td>First Name: </td> <td><input type="text" name="F4FirstName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4FirstName']; ?>" /></td> </tr> <tr> <td>Middle Initial: </td> <td><input type="text" name="F4MI" size="2" maxlength="1" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4MI']; ?>" /></td> </tr> <tr> <td>Last Name: </td> <td><input type="text" name="F4LastName" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4LastName']; ?>" /></td> </tr> <tr> <td>Federation No: </td> <td><input type="text" name="F4FedNum" maxlength="10" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4FedNum']; ?>" /></td> </tr> <tr> <td>SSN: </td> <td><input type="text" name="F4SSN1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN1']; ?>" /> - <input type="text" name="F4SSN2" size="2" maxlength="2" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN2']; ?>" /> - <input type="text" name="F4SSN3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4SSN3']; ?>" /> </td> </tr> <tr> <td>Date of Birth</td> <td> <select name="F4DOB1"> <option value="">Month</option> <?php for ($i=1; $i<=12; $i++) { echo "<option value='$i'"; if ($fields["F4DOB1"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F4DOB2"> <option value="">Day</option> <?php for ($i=1; $i<=31; $i++) { echo "<option value='$i'"; if ($fields["F4DOB2"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> / <select name="F4DOB3"> <option value="">Year</option> <?php for ($i=date('Y'); $i>=1900; $i--) { echo "<option value='$i'"; if ($fields["F4DOB3"] == $i) echo " selected"; echo ">$i</option>"; } ?> </select> </td> </tr> <tr> <td>Address: </td> <td><input type="text" name="F4Address" value="<?php echo $fields['F4Address']; ?>" /></td> </tr> <tr> <td>City: </td> 
<td><input type="text" name="F4City" onkeyup="if(!this.value.match(/^([a-z]+\s?)*$/i))this.value=this.value.replace(/[^a-z ]/ig,'').replace(/\s+/g,' ')" value="<?php echo $fields['F4City']; ?>" /></td> </tr> <tr> <td>State: </td> <td><select name="F4State"><option value="">Choose a State</option><?php showOptionsDrop($states_arr, null, true); ?></select></td> </tr> <tr> <td>Zip Code: </td> <td><input type="text" name="F4Zip" size="6" maxlength="5" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Zip']; ?>" /></td> </tr> <tr> <td>Contact Telephone No: </td> <td>( <input type="text" name="F4Phone1" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone1']; ?>" /> ) <input type="text" name="F4Phone2" size="3" maxlength="3" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone2']; ?>" /> - <input type="text" name="F4Phone3" size="4" maxlength="4" onkeyup="this.value=this.value.replace(/[^0-9]/ig, '')" value="<?php echo $fields['F4Phone3']; ?>" /> </td> </tr> <tr> <td>Email:</td> <td><input type="text" name="F4Email" value="<?php echo $fields['F4Email']; ?>" /></td> </tr> </table> <?php } ?> <div align="right"><input class="enrbutton" type="submit" name="submit" value="Continue" /></div> </form> This only goes through 4 athletes and I need it to capture 20. Any ideas? I am forced to keep all 200+ elements in SESSION assuming somebody enrolls 20 athletes.

    Read the article

< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >