Search Results

Search found 12909 results on 517 pages for 'clustered index'.


  • Nginx alias or rewrite for Horde Groupware ActiveSync URL does not process the rpc.php file

    - by Benny Li
    I'm trying to set up Horde Groupware with Nginx. The web interface works, but I cannot get the ActiveSync-specific URL to work. The Horde Wiki explains how to use it with an Apache webserver here. My problem is that I set up a rewrite (I tried an alias too) to serve the location /horde/Microsoft-Server-ActiveSync via the /horde/rpc.php script. With my current configuration Nginx performs the rewrite and returns a 200 status code, but it looks like the PHP file is not executed. If I go to /horde/rpc.php directly, it opens the login dialog, so that part seems to work correctly. I googled the problem first but could not find a working solution, so now I would like to ask you. The configuration should allow access to the ActiveSync part via the URL /horde/Microsoft-Server-ActiveSync; the Horde web interface is already accessible via /horde. My configuration looks like this:

    default-ssl.conf

        server {
            listen 443 ssl;
            ssl on;
            ssl_certificate /opt/nginx/conf/certs/server.crt;
            ssl_certificate_key /opt/nginx/conf/certs/server.key;
            server_name example.com;
            index index.html index.php;
            root /var/www;
            include sites-available/horde.conf;
        }

    horde.conf

        location /horde {
            rewrite_log on;
            rewrite ^/horde/Microsoft-Server-ActiveSync(.*)$ /horde/rpc.php$1 last;
            try_files $uri $uri/ /rampage.php?$args;

            location ~ \.php$ {
                try_files $uri =404;
                include sites-available/horde.fcgi-php.conf;
            }
        }

    horde.fcgi-php.conf

        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    fastcgi_params (default nginx)

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param HTTPS $https if_not_empty;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    The nginx log level is set to debug. The output after the request is:

        2014/06/13 10:33:15 [notice] 17332#0: *1 "^/horde/Microsoft-Server-ActiveSync(.*)$" matches "/horde/Microsoft-Server-ActiveSync", client: XX.XX.XX.XX, server: example.com, request: "GET /horde/Microsoft-Server-ActiveSync HTTP/1.1", host: "example.com"
        2014/06/13 10:33:15 [notice] 17332#0: *1 rewritten data: "/horde/rpc.php", args: "", client: XX.XX.XX.XX, server: example.com, request: "GET /horde/Microsoft-Server-ActiveSync HTTP/1.1", host: "example.com"

    All this is happening on a Raspberry Pi with Raspbian GNU/Linux 7 (which is essentially Debian Wheezy). So I guess the rewrite works but the PHP file is not processed?! Does anyone know where the problem is and how to fix it?
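
    A commonly suggested alternative, sketched here untested and assuming the layout shown above: skip the rewrite entirely and give the ActiveSync URL its own location that hands every request straight to rpc.php, so the request never depends on re-entering location matching.

        # Hypothetical sketch: serve the ActiveSync URL directly via rpc.php,
        # self-contained so the generic SCRIPT_FILENAME is never applied.
        location /horde/Microsoft-Server-ActiveSync {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            # Force everything to rpc.php; any trailing URL part becomes
            # PATH_INFO for Horde's RPC dispatcher.
            fastcgi_param SCRIPT_FILENAME $document_root/horde/rpc.php;
            fastcgi_param PATH_INFO $uri;
        }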

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one LVM physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both the home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see what look like filenames and user data in there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, and neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed the last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
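
    For anyone in the same spot wanting to save that image before rebuilding, a minimal sketch from a live CD (device and volume-group names here are guesses based on the lithe_root label above; adjust to your own LUKS/LVM layout):

        # Hypothetical device names -- substitute your own partition and VG.
        cryptsetup luksOpen /dev/sda2 crypt_pv     # open the encrypted container
        vgchange -ay lithe                         # activate the volume group
        # Image the decrypted logical volume before any further fsck attempts;
        # conv=noerror,sync keeps going past read errors and pads bad sectors.
        dd if=/dev/mapper/lithe-root of=/mnt/usb/lithe_root.img bs=4M conv=noerror,sync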

  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows' Documents and Settings folder which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error:

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these: Deleting an index entry from index $0 of file 25. ...followed by: Deleting index entry cookies in index $I30 of file 37576. ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time chkdsk /f /r. It produced no messages. Same result when deleting.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremely long: original directory = 194 characters, filenames = 100+ characters. Together the length exceeds the 255-char limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deletable:

        C:\1\2\Favorites\Wien\What To Do..
        C:\1\2\Favorites\Photography\FIRE

    Same error as above. Here is what Explorer properties shows for both folders:

    Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS-KB article, which contains nearly everyone's idea and then some, inconspicuously titled: You cannot delete a file or a folder on an NTFS file system volume. I was able to delete the 2nd folder C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested. NOTE: A normal del did NOT work, nor did deleting from Explorer. Now all that is left is the first directory C:\1\2\Favorites\Wien\What To Do.. (yes, I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)
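
    For reference, a sketch of how that extended-path syntax from the KB article applies to the two stubborn folders (paths hypothetical; the \\?\ prefix bypasses normal Win32 name parsing, which is what otherwise strips trailing dots and spaces before the filesystem ever sees them):

        rem Remove a directory whose name ends in dots, which the Win32
        rem layer would otherwise silently trim away:
        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."

        rem Same trick for a directory with an invisible trailing space:
        rd /s /q "\\?\C:\1\2\Favorites\Photography\FIRE "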

  • Setup Custom Portal & Content Enabled Domain

    - by Stefan Krantz
    Looking back over the past year, we have seen a large increase in deployments where only some parts of the WebCenter Suite infrastructure are used. The most common, from my personal perspective, is a domain topology that includes: WebCenter Custom Portal, WebCenter Content and Oracle HTTP Services. Today it is very common to see installations where the whole suite is installed when the use case only requires the custom portal and some sub-component like WebCenter Content. This post will go into detail on how to minimize the deployment time and effort by laying down only the necessary managed servers. By following this proposed method you will minimize the configuration steps, install only the required components and schemas, configure only the necessary components, and minimize the impact of architectural changes through reduced dependencies.

    Assumptions: Oracle 11g Database installed; SYS or equivalent access to the database to set up schemas via RCU; a running operating system supporting JDK 7 Update 2 (check the support matrix here); a good understanding of WebLogic architecture.

    Binaries: Oracle JDK 7 Update 2 (1.7.0_02) (Download); Oracle WebLogic 10.3.6 (Download); Oracle WebCenter Binaries (11.1.1.6) (Download); Oracle WebCenter Content Binaries (11.1.1.6) (Download 1) (Download 2); Oracle HTTP Services (11.1.1.6) (Download); Oracle Repository Creation Utility (11.1.1.6) (Download Linux or Windows)

    Schemas: MDS - Meta Data Services (WebCenter and OWSM); WebCenter (WebCenter Schema); OCS (Oracle WebCenter Content); Activities (WebCenter Activities); OPSS (Policy Store for WebCenter)

    Installation structure:

        [Installation Home]/Middleware
            Oracle_WC1 (WebCenter installation)
            Oracle_WT1 (Oracle WebTier)
            Oracle_ECM (WebCenter Content)
            wlserver_10.3 (WebLogic installation)
        [Installation Home]/domains
            webcenter (WebCenter domain)
            instances (OHS/OPMN instance)
        [Installation Home]/applications
        [Installation Home]/JDK1.7.0_02

    Installation and configuration steps:

    Install Java and configure Java Home. Extract the Java installable (jdk-7u2-linux-x64) to [Installation Home]/JDK1.7.0_02. Add JAVA_HOME to the environment settings (JAVA_HOME=[Installation Home]/JDK1.7.0_02) and update PATH (PATH=$JAVA_HOME/bin:$PATH).

    Install WebLogic Server (Middleware Home). Run the installer / execute the jar file (java -jar wls1036_generic.jar). Create the Middleware Home under [Installation Home]/Middleware.

    Install WebCenter Portal (extend Middleware Home). Extract the compressed file (ofm_wc_generic_11.1.1.6.0_disk1_1of1.zip) to a temp folder. Execute runInstaller under folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME). Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_WC1).

    Install WebCenter Content (extend Middleware Home). Extract the compressed files (ofm_wcc_generic_11.1.1.6.0_disk1_1of2.zip & ofm_wcc_generic_11.1.1.6.0_disk1_2of2.zip) to the same temp folder. Execute runInstaller under folder (DISK1/) with the following command (runInstaller -jreLoc $JAVA_HOME). Make sure to install in the following structure ([Installation Home]/Middleware/Oracle_ECM).

    Configure the initial domain (domain name webcenter). Execute the configuration tool - [Installation Home]/Middleware/wlserver_10.3/common/bin/config. Select "Create a New Weblogic Domain". Select the following templates (Basic Weblogic Server Domain, Oracle Enterprise Manager, Oracle WSM Policy Manager, Oracle JRF). Create the new domain with name webcenter under [Installation Home]/domains, with applications under [Installation Home]/applications. Select Production Mode and finish the configuration wizard. Set up the username for the startup scripts: add a new file called boot.properties to [Installation Home]/domains/webcenter/servers/AdminServer/security with the following lines:

        username=weblogic
        password=[password in clear text; it will be encrypted during first start]

    Start the AdminServer in the background ([Installation Home]/domains/webcenter/bin/startWeblogic).

    Install and configure Oracle WebTier (OHS Server). Extract the compressed file (ofm_webtier_linux_11.1.1.6.0_64_disk1_1of1.zip) to a temp folder. Execute runInstaller under folder (DISK1/). Select the Install & Configure option, deselect Oracle WebCache, and auto-configure ports.

    Configure the schemas with RCU (Repository Creation Utility). Extract the compressed file (ofm_rcu_linux_11.1.1.6.0_disk1_1of1.zip) to a temp folder and execute rcu ([temp]/rcuHome/rcu). Make sure the database meets the RCU requirements, in particular that PROCESSES is 200 or more. Using SQLPLUS and the sys user you can update this configuration in the database with the following procedure:

        ALTER SYSTEM SET PROCESSES=200 SCOPE=SPFILE
        shutdown immediate
        startup

    Create and configure the following schemas: MDS - Meta Data Services (WebCenter and OWSM); WebCenter (WebCenter Schema); OCS (Oracle WebCenter Content); Activities (WebCenter Activities); OPSS (Policy Store for WebCenter). Remember the selected schema prefix and password (they will be used later).

    Configure the WebCenter Portal instance (WC_CustomPortal). Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_WC1/common/bin/config). Select Extend an Existing WebLogic domain. Select the existing webcenter domain ([Installation Home]/domains/webcenter). Select Extend my domain using existing extension template, browse to ([Installation Home]/Middleware/Oracle_WC1/common/templates/applications) and select oracle.wc_custom_portal_template_11.1.1.jar. Select to configure (Managed Servers/Clusters/Machines). On the Managed Server screen you can now configure 1 or more WC_CustomPortal managed servers (name them WC_CustomPortal[n]; skip the numbering if not clustered). In the case of two WC_CustomPortal servers, create a cluster (any name) and make sure the managed servers join the new cluster. Create a new machine with the same name as the current machine and make sure the AdminServer and the WC_CustomPortal[n] managed servers join the machine. Finish the configuration wizard. Stop the AdminServer ([Installation Home]/domains/webcenter/bin/stopWeblogic), start it again in the background ([Installation Home]/domains/webcenter/bin/startWeblogic), then start WC_CustomPortal in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer WC_CustomPortal) - repeat for each WC_CustomPortal instance on the host - and give the credentials for the weblogic user on start up. Copy the security folder, including the file boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/). The result should be ([Installation Home]/domains/webcenter/servers/WC_CustomPortal/security/boot.properties).

    Configure the WebCenter Content instance (UCM_server1). Execute the following command to start the configuration wizard ([Installation Home]/Middleware/Oracle_ECM/common/bin/config). Select Extend an Existing WebLogic domain. Select Oracle Universal Content Management - Content Server. Select to configure (Managed Servers/Clusters/Machines). On the Managed Server screen create only one managed server instance (UCM_server1 on port 16200; you can select any other available port). Make sure the UCM_server1 managed server joins the machine. Finish the configuration wizard. Stop the AdminServer, start it again in the background, then start UCM_server1 in the foreground ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1) and give the credentials for the weblogic user on start up. Copy the security folder, including boot.properties, from ([Installation Home]/domains/webcenter/servers/AdminServer/) to ([Installation Home]/domains/webcenter/servers/UCM_server1/). The result should be ([Installation Home]/domains/webcenter/servers/UCM_server1/security/boot.properties).

    Post-configure the WebCenter Content instance for WebCenter Portal. Open a browser with support for Java applets and navigate to http://host:port/cs. WARNING: the page that you are presented with after authentication will only appear once for each instance. WARNING: make sure you set correct storage options - also remember to consider file-sharing options if you would like to cluster your Content Server instance over multiple hosts. Set an appropriate auto number prefix. Update the Server Socket Port (commonly set to 4444), used for RIDC communication (a requirement for WebCenter Portal). Update the IP Address Filter to include the IPs that are planned to access the server over RIDC - at the minimum add the IP address of the current host (this option can be updated later via EM). Stop UCM_server1 ([Installation Home]/domains/webcenter/bin/stopManagedServer UCM_server1) and start it again in the background ([Installation Home]/domains/webcenter/bin/startManagedServer UCM_server1). Open a browser and navigate to http://host:port/cs again. Navigate to Administration/Admin Server, go to General Configuration, check Enable Accounts, and in Additional Configuration Variables add (on two lines):

        AllowUpdateForGenwww=1
        CollectionUseCache=1

    Save the changes and go to the Component Manager. Click on the link advanced component manager and enable the following components: Folders_g, WebCenterConfigure, SiteStudio, SiteStudioExternalApplications, DBSearchContainsOpSupport. WARNING: make sure that the following component is disabled: FrameworkFolders. Stop UCM_server1 and start it again in the background. Open a browser, navigate to http://host:port/cs, then to Administration/Site Studio Administration, and update - do not forget to save and submit each page - Set Default Project and Set Default WebAssets.

    Post-configure Oracle WebTier (OHS) to include the Content Server and WebCenter Portal application contexts. Update the following file: [Installation Home]/domains/instances/instance1/config/OHS/ohs1/mod_wl_ohs.conf. For a single node, add lines from the following example: Link. For a clustered environment, add lines from the following template (note the clustering in the example only applies to WC_CustomPortal): Link. For more information on this: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/contentsvr.htm#WCEDG318. A rough sketch of what those entries typically look like follows at the end of this post.

    Optional - Configure JOC. Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/extend_wc.htm#WCEDG264

    Optional (recommended) - Configure Node Manager. Follow the instructions: http://docs.oracle.com/cd/E23943_01/core.1111/e12037/node_manager.htm#WCEDG277

    Optional (mandatory for clustered environments) - Re-associate the Policy Store to Database or OID. Follow the instructions: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e12405/wcadm_security_credstore.htm#CFHDEDJH

    Optional - Configure Coherence for Content Presenter. Follow the instructions in this blog post (the post is for PS4): https://blogs.oracle.com/ATEAM_WEBCENTER/entry/enabling_coherence_for_content_presenter

    Other recommended posts: Cloning WebCenter Custom Portal - https://blogs.oracle.com/ATEAM_WEBCENTER/entry/cloning_a_webcenter_portal_managed; Improving WebCenter Performance through caching - https://blogs.oracle.com/ATEAM_WEBCENTER/entry/improving_webcenter_performance
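
    As promised above, here is a rough, hypothetical sketch of the single-node mod_wl_ohs.conf entries the linked examples provide; the context paths, host names and ports are placeholders and must match your own domain:

        # Hypothetical single-node entries; adjust Location paths, hosts and ports.
        <Location /mycustomportal>
            SetHandler weblogic-handler
            WebLogicHost myhost.example.com
            WebLogicPort 8888
        </Location>

        <Location /cs>
            SetHandler weblogic-handler
            WebLogicHost myhost.example.com
            WebLogicPort 16200
        </Location>

        # For a clustered WC_CustomPortal, replace WebLogicHost/WebLogicPort with:
        #   WebLogicCluster host1.example.com:8888,host2.example.com:8888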

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections.

    The Generic Collections: System.Collections.Generic namespace

    The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because, depending on how you are using the collections, it's completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at the beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons; at the end we'll summarize with a table to help make it easier to compare and contrast the different collections.

    The Associative Collection Classes

    Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where the key is the order id and the value is the order.

    The Dictionary<TKey,TValue> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals(), or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order.

    The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValue> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted.
    The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order; thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key.

    The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however, is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again, I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare.

    The Non-Associative Containers

    The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection.

    The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grows once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However, inserting and removing in the beginning or middle of the List<T> is very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed.

    The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there are no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense.

    The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle.
    In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction.

    The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>.

    Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T> on the other hand is a first-in-first-out (FIFO) container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.

    So that's the basic collections. Let's summarize what we've learned in a quick reference table.

        Collection       | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency         | Manipulate Efficiency | Notes
        -----------------|----------|---------------------|----------------|---------------------------|-----------------------|------
        Dictionary       | No       | Yes                 | Via Key        | Key: O(1)                 | O(1)                  | Best for high performance lookups.
        SortedDictionary | Yes      | No                  | Via Key        | Key: O(log n)             | O(log n)              | Compromise of Dictionary speed and ordering; uses binary search tree.
        SortedList       | Yes      | Yes                 | Via Key        | Key: O(log n)             | O(n)                  | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data but slower loads.
        List             | No       | Yes                 | Via Index      | Index: O(1), Value: O(n)  | O(n)                  | Best for smaller lists where direct access is required and no ordering.
        LinkedList       | No       | No                  | No             | Value: O(n)               | O(1)                  | Best for lists where inserting/deleting in the middle is common and no direct access is required.
        HashSet          | No       | Yes                 | Via Key        | Key: O(1)                 | O(1)                  | Unique unordered collection; like a Dictionary except key and value are the same object.
        SortedSet        | Yes      | No                  | Via Key        | Key: O(log n)             | O(log n)              | Unique ordered collection; like SortedDictionary except key and value are the same object.
        Stack            | No       | Yes                 | Only Top       | Top: O(1)                 | O(1)*                 | Essentially same as List<T> except only processed as LIFO.
        Queue            | No       | Yes                 | Only Front     | Front: O(1)               | O(1)                  | Essentially same as List<T> except only processed as FIFO.

    The Original Collections: System.Collections namespace

    The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

        ArrayList - A dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
        Hashtable - Associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
        Queue - First-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
        SortedList - Associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<TKey,TValue> instead.
        Stack - Last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

    In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

    The Concurrent Collections: System.Collections.Concurrent namespace

    The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each with a link to a blog post I did on each of them.

        ConcurrentQueue - Thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
        ConcurrentStack - Thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
        ConcurrentBag - Thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
        ConcurrentDictionary - Thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
        BlockingCollection - Wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read. Writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

    Summary

    The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed access if you are running on .NET 4.0 or higher.

    Technorati Tags: C#,.NET,Collections,Generic,Concurrent,Dictionary,List,Stack,Queue,SortedList,SortedDictionary,HashSet,SortedSet
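
    To make those tradeoffs concrete, here is a small illustrative sketch (not from the original article) contrasting a hashed Dictionary lookup, ordered iteration over a SortedDictionary, and a HashSet membership check like the vendor-code example above:

        using System;
        using System.Collections.Generic;

        class CollectionChoiceDemo
        {
            static void Main()
            {
                // Dictionary: O(1) hashed lookups, but no ordering guarantees.
                var ordersById = new Dictionary<int, string>
                {
                    [42] = "Widgets", [7] = "Gadgets", [19] = "Sprockets"
                };
                Console.WriteLine(ordersById[7]);                 // fast lookup by key

                // SortedDictionary: O(log n) operations, iterates in key order.
                var sortedOrders = new SortedDictionary<int, string>(ordersById);
                foreach (var pair in sortedOrders)
                    Console.WriteLine($"{pair.Key}: {pair.Value}"); // 7, 19, 42

                // HashSet: membership checks against a set of unique values.
                var vendorCodes = new HashSet<string> { "ACME", "GLOBEX" };
                Console.WriteLine(vendorCodes.Contains("ACME"));    // True
            }
        }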

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables' mother says you should always sanitise your data input. Except that I think she's wrong. The SQL Injection aspect is for another post, where I'll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports. Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports.

    Let's look at the data input sanitisation aspect - except that I'm going to call it 'parameter validation'. I'm talking about code that looks like this:

        create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            /* First check that @eomdate is a valid date */
            if isdate(@eomdate) != 1
            begin
                select 'Please enter a valid date' as ErrorMessage;
                return;
            end
            /* Then check that time has passed since @eomdate */
            if datediff(day,@eomdate,sysdatetime()) < 5
            begin
                select 'Sorry - EOM is not complete yet' as ErrorMessage;
                return;
            end
            /* If those checks have succeeded, return the data */
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales
            from Sales.SalesOrderHeader
            where OrderDate >= dateadd(month,-1,@eomdate)
                and OrderDate < @eomdate
            group by SalesPersonID
            order by SalesPersonID;
        end

    Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don't like it. The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. But let's consider the first one of these problems for starters. I'll get to the second one in a moment.

    If you read Jes Borland (@grrl_geek)'s recent post about returning multiple result sets in Reporting Services, you'll be aware that Reporting Services doesn't support multiple result sets from a single query. And when it says 'single query', it includes 'stored procedure call'. It'll only handle the first result set that comes back. But that's okay - we have RETURN statements, so our stored procedure will only ever return a single result set. Sometimes that result set might contain a single field called ErrorMessage, but it's still only one result set.

    Except that it's not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs to have a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn't want to have to run the whole thing. That could take ages. (Maybe it's seen some of the stored procedures I've had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to be able to figure out the shape of the output without actually running the contents. It's very useful, you might think.
        set fmtonly on
        exec dbo.GetMonthSummaryPerSalesPerson '20030401';
        set fmtonly off

    Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. But with FMTONLY turned on, those rows don't come back. But what I do get back hurts Reporting Services. It doesn't run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I've made sure that the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them. It would have been much better to push these checks down into the query itself.

        alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales
            from Sales.SalesOrderHeader
            where
                /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
                /* And that it's sufficiently past */
                and datediff(day,@eomdate,sysdatetime()) >= 5
                /* And now use it in the filter as appropriate */
                and OrderDate >= dateadd(month,-1,@eomdate)
                and OrderDate < @eomdate
            group by SalesPersonID
            order by SalesPersonID;
        end

    Now if we run it with FMTONLY turned on, we get the single result set back. But let's consider the execution plan when we pass in an invalid date. First let's look at one that returns data. I've got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. ...compared to one that uses a future date: You might notice that the estimated costs are similar - the Index Seek is still 28%, the Sort is still 71%. But the size of that arrow coming out of the Index Seek is a whole bunch smaller. The coolest thing here is what's going on with that Index Seek. Let's look at some of the properties of it. Glance down it with me... Estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That's right - it doesn't run. So much for reading plans right-to-left... The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn't call anything further down the plan (to the right) if the predicate evaluates to false.

    Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message.

        alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /*Placeholder: */ '' as ErrorMessage
            from Sales.SalesOrderHeader
            where
                /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
                /* And that it's sufficiently past */
                and datediff(day,@eomdate,sysdatetime()) >= 5
                /* And now use it in the filter as appropriate */
                and OrderDate >= dateadd(month,-1,@eomdate)
                and OrderDate < @eomdate
            group by SalesPersonID
            /* Now include the error messages */
            union all
            select 0, 0, 0, 'Please enter a valid date' as ErrorMessage
            where isdate(@eomdate) != 1
            union all
            select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage
            where datediff(day,@eomdate,sysdatetime()) < 5
            order by SalesPersonID;
        end

    But still I don't like it, because it's now a stored procedure with a single query.
    And I don't like stored procedures that should be functions. That's right - I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes' night has already passed this year, so I think you miss out. (And I'm not going to remind you about when the PASS Summit is in 2012.)

        create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        returns table
        as
        return (
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage
            from Sales.SalesOrderHeader
            where
                /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
                /* And that it's sufficiently past */
                and datediff(day,@eomdate,sysdatetime()) >= 5
                /* And now use it in the filter as appropriate */
                and OrderDate >= dateadd(month,-1,@eomdate)
                and OrderDate < @eomdate
            group by SalesPersonID
            union all
            select 0, 0, 0, 'Please enter a valid date' as ErrorMessage
            where isdate(@eomdate) != 1
            union all
            select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage
            where datediff(day,@eomdate,sysdatetime()) < 5
        );

    We've had to lose the ORDER BY - but that's fine, as that's a client thing anyway. We can have our reports leverage this stored query still, but we're recognising that it's a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data.

    And we get so much more flexibility with this. If you haven't seen the simplification stuff that I've preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You'll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn't follow the link from a few paragraphs ago. So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like:

        select SalesPersonID
        from dbo.GetMonthSummaryPerSalesPerson(@eomonth)
        order by SalesPersonID;

    This doesn't need to look up the TotalDue field, which makes a simpler plan.

        select *
        from dbo.GetMonthSummaryPerSalesPerson(@eomonth)
        where SalesPersonID is not null
        order by SalesPersonID;

    This one can avoid having to do the work on the rows that don't have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query - and isn't that one of the main points of modularisation?

    Stored procedures in Reporting Services are just a bit limited for my liking. They're useful in plenty of ways, but if you insist on using stored procedures all the time rather than queries that use functions - that's rubbish.

    @rob_farley
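
    Incidentally, the point about sys.columns is easy to verify with a catalog query like this one (a hypothetical check, not from the original post):

        -- A table-valued function's result shape is recorded in the catalog,
        -- which is exactly the metadata SSRS needs to discover fields.
        select c.name, t.name as type_name
        from sys.columns c
        join sys.types t on t.user_type_id = c.user_type_id
        where c.object_id = object_id('dbo.GetMonthSummaryPerSalesPerson');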

  • LWJGL Voxel game, glDrawArrays

    - by user22015
    I've been learning about 3D for a couple of days now. I managed to create a chunk (8x8x8), added optimization so it only renders the active and visible blocks, and then made it draw only the faces which don't have a neighbor. Next, what I found from online research was that it is better to use glDrawArrays to increase performance. So I restarted my little project: render an entire chunk, add optimization so it only renders active and visible blocks. But now I want to make it draw only the visible faces while using glDrawArrays. This is giving me some trouble with calling glDrawArrays, because I'm passing a wrong count parameter.

        # A fatal error has been detected by the Java Runtime Environment:
        #
        # EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000006e31a03, pid=1032, tid=3184
        #
        # Stack: [0x00000000023a0000,0x00000000024a0000], sp=0x000000000249ef70, free space=1019k
        Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
        C [ig4icd64.dll+0xa1a03]
        Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
        j org.lwjgl.opengl.GL11.nglDrawArrays(IIIJ)V+0
        j org.lwjgl.opengl.GL11.glDrawArrays(III)V+20
        j com.vox.block.Chunk.render()V+410
        j com.vox.ChunkManager.render()V+30
        j com.vox.Game.render()V+11
        j com.vox.GameHandler.render()V+12
        j com.vox.GameHandler.gameLoop()V+15
        j com.vox.Main.main([Ljava/lang/StringV+13
        v ~StubRoutines::call_stub

        public class Chunk {
            public final static int[] DIM = { 8, 8, 8 };
            public final static int CHUNK_SIZE = (DIM[0] * DIM[1] * DIM[2]);

            Block[][][] blocks;
            private int index;
            private int vBOVertexHandle;
            private int vBOColorHandle;

            public Chunk(int index) {
                this.index = index;
                vBOColorHandle = GL15.glGenBuffers();
                vBOVertexHandle = GL15.glGenBuffers();
                blocks = new Block[DIM[0]][DIM[1]][DIM[2]];
                for (int x = 0; x < DIM[0]; x++) {
                    for (int y = 0; y < DIM[1]; y++) {
                        for (int z = 0; z < DIM[2]; z++) {
                            blocks[x][y][z] = new Block();
                        }
                    }
                }
            }

            public void render() {
                Block curr;
                FloatBuffer vertexPositionData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12);
                FloatBuffer vertexColorData2 = BufferUtils.createFloatBuffer(CHUNK_SIZE * 6 * 12);
                int counter = 0;
                for (int x = 0; x < DIM[0]; x++) {
                    for (int y = 0; y < DIM[1]; y++) {
                        for (int z = 0; z < DIM[2]; z++) {
                            curr = blocks[x][y][z];
                            boolean[] neightbours = validateNeightbours(x, y, z);
                            if (curr.isActive() && !neightbours[6]) {
                                float[] arr = curr.createCube((index * DIM[0] * Block.BLOCK_SIZE * 2) + x * 2, y * 2, z * 2, neightbours);
                                counter += arr.length;
                                vertexPositionData2.put(arr);
                                vertexColorData2.put(createCubeVertexCol(curr.getCubeColor()));
                            }
                        }
                    }
                }
                vertexPositionData2.flip();
                vertexPositionData2.flip();
                FloatBuffer vertexPositionData = BufferUtils.createFloatBuffer(vertexColorData2.position());
                FloatBuffer vertexColorData = BufferUtils.createFloatBuffer(vertexColorData2.position());
                for (int i = 0; i < vertexPositionData2.position(); i++)
                    vertexPositionData.put(vertexPositionData2.get(i));
                for (int i = 0; i < vertexColorData2.position(); i++)
                    vertexColorData.put(vertexColorData2.get(i));
                vertexColorData.flip();
                vertexPositionData.flip();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexPositionData, GL15.GL_STATIC_DRAW);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle);
                GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vertexColorData, GL15.GL_STATIC_DRAW);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
                GL11.glPushMatrix();
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOVertexHandle);
                GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0L);
                GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vBOColorHandle);
                GL11.glColorPointer(3, GL11.GL_FLOAT, 0, 0L);
                System.out.println("Counter " + counter);
                GL11.glDrawArrays(GL11.GL_LINE_LOOP, 0, counter);
                GL11.glPopMatrix();
                //blocks[r.nextInt(DIM[0])][2][r.nextInt(DIM[2])].setActive(false);
            }

            //Random r = new Random();

            private float[] createCubeVertexCol(float[] CubeColorArray) {
                float[] cubeColors = new float[CubeColorArray.length * 4 * 6];
                for (int i = 0; i < cubeColors.length; i++) {
                    cubeColors[i] = CubeColorArray[i % CubeColorArray.length];
                }
                return cubeColors;
            }

            private boolean[] validateNeightbours(int x, int y, int z) {
                boolean[] bools = new boolean[7];
                bools[6] = true;
                bools[6] = bools[6] && (bools[0] = y > 0 && y < DIM[1] - 1 && blocks[x][y + 1][z].isActive()); // top
                bools[6] = bools[6] && (bools[1] = y > 0 && y < DIM[1] - 1 && blocks[x][y - 1][z].isActive()); // bottom
                bools[6] = bools[6] && (bools[2] = z > 0 && z < DIM[2] - 1 && blocks[x][y][z + 1].isActive()); // front
                bools[6] = bools[6] && (bools[3] = z > 0 && z < DIM[2] - 1 && blocks[x][y][z - 1].isActive()); // back
                bools[6] = bools[6] && (bools[4] = x > 0 && x < DIM[0] - 1 && blocks[x + 1][y][z].isActive()); // left
                bools[6] = bools[6] && (bools[5] = x > 0 && x < DIM[0] - 1 && blocks[x - 1][y][z].isActive()); // right
                return bools;
            }
        }

        public class Block {
            public static final float BLOCK_SIZE = 1f;

            public enum BlockType {
                Default(0), Grass(1), Dirt(2), Water(3), Stone(4), Wood(5), Sand(6), LAVA(7);

                int BlockID;

                BlockType(int i) {
                    BlockID = i;
                }
            }

            private boolean active;
            private BlockType type;

            public Block() {
                this(BlockType.Default);
            }

            public Block(BlockType type) {
                active = true;
                this.type = type;
            }

            public float[] getCubeColor() {
                switch (type.BlockID) {
                    case 1: return new float[] { 1, 1, 0 };
                    case 2: return new float[] { 1, 0.5f, 0 };
                    case 3: return new float[] { 0, 0f, 1f };
                    default: return new float[] { 0.5f, 0.5f, 1f };
                }
            }

            public float[] createCube(float x, float y, float z, boolean[] neightbours) {
                int counter = 0;
                for (boolean b : neightbours)
                    if (!b) counter++;
                float[] array = new float[counter * 12];
                int offset = 0;
                if (!neightbours[0]) { // top
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                }
                if (!neightbours[1]) { // bottom
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                }
                if (!neightbours[2]) { // front
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                }
                if (!neightbours[3]) { // back
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                }
                if (!neightbours[4]) { // left
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                }
                if (!neightbours[5]) { // right
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE + BLOCK_SIZE;
                    array[offset++] = x*BLOCK_SIZE + BLOCK_SIZE; array[offset++] = y*BLOCK_SIZE - BLOCK_SIZE; array[offset++] = z*BLOCK_SIZE - BLOCK_SIZE;
                }
                return Arrays.copyOf(array, offset);
            }

            public boolean isActive() { return active; }
            public void setActive(boolean active) { this.active = active; }
            public BlockType getType() { return type; }
            public void setType(BlockType type) { this.type = type; }
        }

    I highlighted the code I'm concerned about in this screenshot: http://imageshack.us/a/img820/7606/18626782.png (Not allowed to upload images yet.) I know the code is a mess, but I'm just testing stuff so I wasn't really thinking about it.

    Read the article

  • A jQuery Plug-in to monitor Html Element CSS Changes

    - by Rick Strahl
    Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when that CSS element changes. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects and monitor changes on the ‘parent’ object so the dependent object can adjust itself accordingly. What I wanted to create is a jQuery plug-in that allows me to specify a list of CSS properties to monitor and have a function fire in response to any change to any of those CSS properties. The result are the .watch() and .unwatch() jQuery plug-ins. Here’s a simple example page of this plug-in that demonstrates tracking changes to an element being moved with draggable and closable behavior: http://www.west-wind.com/WestWindWebToolkit/samples/Ajax/jQueryPluginSamples/WatcherPlugin.htm Try it with different browsers – IE and FireFox use the DOM event handlers and Chrome, Safari and Opera use setInterval handlers to manage this behavior. It should work in all of them but all but IE and FireFox will show a bit of lag between the changes in the main element and the shadow. The relevant HTML for this example is this fragment of a main <div> (#notebox) and an element that is to mimic a shadow (#shadow). <div class="containercontent"> <div id="notebox" style="width: 200px; height: 150px;position: absolute; z-index: 20; padding: 20px; background-color: lightsteelblue;"> Go ahead drag me around and close me! </div> <div id="shadow" style="background-color: Gray; z-index: 19;position:absolute;display: none;"> </div> </div> The watcher plug in is then applied to the main <div> and shadow in sync with the following plug-in code: <script type="text/javascript"> $(document).ready(function () { var counter = 0; $("#notebox").watch("top,left,height,width,display,opacity", function (data, i) { var el = $(this); var sh = $("#shadow"); var propChanged = data.props[i]; var valChanged = data.vals[i]; counter++; showStatus("Prop: " + propChanged + " value: " + valChanged + " " + counter); var pos = el.position(); var w = el.outerWidth(); var h = el.outerHeight(); sh.css({ width: w, height: h, left: pos.left + 5, top: pos.top + 5, display: el.css("display"), opacity: el.css("opacity") }); }) .draggable() .closable() .css("left", 10); }); </script> When you run this page as you drag the #notebox element the #shadow element will maintain and stay pinned underneath the #notebox element effectively keeping the shadow attached to the main element. Likewise, if you hide or fadeOut() the #notebox element the shadow will also go away – show the #notebox element and the shadow also re-appears because we are assigning the display property from the parent on the shadow. Note we’re attaching the .watch() plug-in to the #notebox element and have it fire whenever top,left,height,width,opacity or display CSS properties are changed. The passed data element contains a props[] and vals[] array that holds the properties monitored and their current values. An index passed as the second parm tells you which property has changed and what its current value is (propChanged/valChanged in the code above). The rest of the watcher handler code then deals with figuring out the main element’s position and recalculating and setting the shadow’s position using the jQuery .css() function. Note that this is just an example to demonstrate the watch() behavior here – this is not the best way to create a shadow. 
If you’re interested in a more efficient and cleaner way to handle shadows with a plug-in check out the .shadow() plug-in in ww.jquery.js (code search for fn.shadow) which uses native CSS features when available but falls back to a tracked shadow element on browsers that don’t support it, which is how this watch() plug-in came about in the first place :-) How does it work? The plug-in works by letting the user specify a list of properties to monitor as a comma delimited string and a handler function: el.watch("top,left,height,width,display,opacity", function (data, i) {}, 100, id) You can also specify an interval (if no DOM event monitoring isn’t available in the browser) and an ID that identifies the event handler uniquely. The watch plug-in works by hooking up to DOMAttrModified in FireFox, to onPropertyChanged in Internet Explorer, or by using a timer with setInterval to handle the detection of changes for other browsers. Unfortunately WebKit doesn’t support DOMAttrModified consistently at the moment so Safari and Chrome currently have to use the slower setInterval mechanism. In response to a changed property (or a setInterval timer hit) a JavaScript handler is fired which then runs through all the properties monitored and determines if and which one has changed. The DOM events fire on all property/style changes so the intermediate plug-in handler filters only those hits we’re interested in. If one of our monitored properties has changed the specified event handler function is called along with a data object and an index that identifies the property that’s changed in the data.props/data.vals arrays. The jQuery plugin to implement this functionality looks like this: (function($){ $.fn.watch = function (props, func, interval, id) { /// <summary> /// Allows you to monitor changes in a specific /// CSS property of an element by polling the value. /// when the value changes a function is called. /// The function called is called in the context /// of the selected element (ie. this) /// </summary> /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param> /// <param name="func" type="Function"> /// Function called when the value has changed. /// </param> /// <param name="interval" type="Number"> /// Optional interval for browsers that don't support DOMAttrModified or propertychange events. /// Determines the interval used for setInterval calls. /// </param> /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param> /// <returns type="jQuery" /> if (!interval) interval = 100; if (!id) id = "_watcher"; return this.each(function () { var _t = this; var el$ = $(this); var fnc = function () { __watcher.call(_t, id) }; var data = { id: id, props: props.split(","), vals: [props.split(",").length], func: func, fnc: fnc, origProps: props, interval: interval, intervalId: null }; // store initial props and values $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); }); el$.data(id, data); hookChange(el$, id, data); }); function hookChange(el$, id, data) { el$.each(function () { var el = $(this); if (typeof (el.get(0).onpropertychange) == "object") el.bind("propertychange." + id, data.fnc); else if ($.browser.mozilla) el.bind("DOMAttrModified." 
+ id, data.fnc); else data.intervalId = setInterval(data.fnc, interval); }); } function __watcher(id) { var el$ = $(this); var w = el$.data(id); if (!w) return; var _t = this; if (!w.func) return; // must unbind or else unwanted recursion may occur el$.unwatch(id); var changed = false; var i = 0; for (i; i < w.props.length; i++) { var newVal = el$.css(w.props[i]); if (w.vals[i] != newVal) { w.vals[i] = newVal; changed = true; break; } } if (changed) w.func.call(_t, w, i); // rebind event hookChange(el$, id, w); } } $.fn.unwatch = function (id) { this.each(function () { var el = $(this); var data = el.data(id); try { if (typeof (this.onpropertychange) == "object") el.unbind("propertychange." + id, data.fnc); else if ($.browser.mozilla) el.unbind("DOMAttrModified." + id, data.fnc); else clearInterval(data.intervalId); } // ignore if element was already unbound catch (e) { } }); return this; } })(jQuery); Note that there’s a corresponding .unwatch() plug-in that can be used to stop monitoring properties. The ID parameter is optional both on watch() and unwatch() – a standard name is used if you don’t specify one, but it’s a good idea to use unique names for each element watched to avoid overlap in event ids especially if you’re monitoring many elements. The syntax is: $.fn.watch = function(props, func, interval, id) props A comma delimited list of CSS style properties that are to be watched for changes. If any of the specified properties changes the function specified in the second parameter is fired. func The function fired in response to a changed styles. Receives this as the element changed and an object parameter that represents the watched properties and their respective values. The first parameter is passed in this structure: { id: watcherId, props: [], vals: [], func: thisFunc, fnc: internalHandler, origProps: strPropertyListOnWatcher }; A second parameter is the index of the changed property so data.props[i] or data.vals[i] gets the property and changed value. interval The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds. id An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified. It’s been a Journey I started building this plug-in about two years ago and had to make many modifications to it in response to changes in jQuery and also in browser behaviors. I think the latest round of changes made should make this plug-in fairly future proof going forward (although I hope there will be better cross-browser change event notifications in the future). One of the big problems I ran into had to do with recursive change notifications – it looks like starting with jQuery 1.44 and later, jQuery internally modifies element properties on some calls to some .css()  property retrievals and things like outerHeight/Width(). In IE this would cause nasty lock up issues at times. In response to this I changed the code to unbind the events when the handler function is called and then rebind when it exits. This also makes user code less prone to stack overflow recursion as you can actually change properties on the base element. 
It also means though that if you change one of the monitors properties in the handler the watch() handler won’t fire in response – you need to resort to a setTimeout() call instead to force the code to run outside of the handler: $("#notebox") el.watch("top,left,height,width,display,opacity", function (data, i) { var el = $(this); … // this makes el changes work setTimeout(function () { el.css("top", 10) },10); }) Since I’ve built this component I’ve had a lot of good uses for it. The .shadow() fallback functionality is one of them. Resources The watch() plug-in is part of ww.jquery.js and the West Wind West Wind Web Toolkit. You’re free to use this code here or the code from the toolkit. West Wind Web Toolkit Latest version of ww.jquery.js (search for fn.watch) watch plug-in documentation © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET  JavaScript  jQuery  

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example for complex reflection is in LINQ to SQL. The DataContext class has a property Privider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query SQL Server database, then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is a explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may be not straight forward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenarios, it is easy to have dynamic in mind, which enables developer to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level. 
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
The main problem is, value type is copied when passing to a method as a parameter. This is why ref keyword is used for the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself will be stored in this._value of DynamicWrapper<T>. Without the ref keyword, when this._value is changed, the value type instance itself does not change. Consider FieldInfo.SetValue(). In the value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes the copy of this._value. I searched the Web and found a solution for setting the value of field: internal static class FieldInfoExtensions { internal static void SetValue<T>(this FieldInfo field, ref T obj, object value) { if (typeof(T).IsValueType) { field.SetValueDirect(__makeref(obj), value); // For value type. } else { field.SetValue(obj, value); // For reference type. } } } Here __makeref is a undocumented keyword of C#. But method invocation has problem. This is the source code of TryInvokeMember(): public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (binder == null) { throw new ArgumentNullException("binder"); } MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ?? this._type.GetInterfaceMethod(binder.Name, args) ?? this._type.GetBaseMethod(binder.Name, args); if (method != null) { // Oops! // If the returnValue is a struct, it is copied to heap. object resultValue = method.Invoke(this._value, args); // And result is a wrapper of that copied struct. result = new DynamicWrapper<object>(ref resultValue); return true; } result = null; return false; } If the returned value is of value type, it will definitely copied, because MethodInfo.Invoke() does return object. If changing the value of the result, the copied struct is changed instead of the original struct. And so is the property and index accessing. They are both actually method invocation. For less confusion, setting property and index are not allowed on struct. Conclusions The DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most of the scenarios, just remember to invoke ToDynamic() method, and access whatever you want: StaticType result = someValue.ToDynamic()._field.Method().Property[index]; In some special scenarios which requires changing the value of a struct (value type), this DynamicWrapper<T> does not work perfectly. Only changing struct’s field value is supported. The source code can be downloaded from here, including a few unit test code.
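As a quick self-contained illustration of the fluent pattern described above (the Safe class and its members are invented for this example; DynamicWrapper<T> and ToDynamic() are assumed to be the ones from this post): public class Safe
{
    // Non-public members that ordinary code cannot touch.
    private string combination = "12-34-56";

    private string Open(string attempt)
    {
        return attempt == this.combination ? "opened" : "locked";
    }
}

public static class FluentReflectionDemo
{
    public static void Run()
    {
        Safe safe = new Safe();

        // TryGetMember reads the private field; TryConvert unwraps it to string.
        string attempt = safe.ToDynamic().combination;

        // TryInvokeMember finds and calls the private method.
        string result = safe.ToDynamic().Open(attempt);

        System.Console.WriteLine(result); // prints "opened"
    }
}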

    Read the article

  • Set up DNS or F5 VIP to send traffic to a specific port

    - by Sam
    I have a clustered SQL instance set up at SERVER01\dev08. It is assigned a static port of 1466. Can I set up something that will let users connect to SERVER01 and hit that port? If this is possible, what problems might it create (all traffic coming to this name hitting only one port)? It seems that DNS has nothing to do with ports - nor does the F5 BIG-IP.
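One detail worth noting while this waits for answers: SQL Server clients can name the port themselves, because the connection string accepts a "host,port" data source, so neither DNS nor the F5 VIP has to carry port information. A minimal C# sketch of that idea (the server name and port are taken from the question; Windows authentication is assumed):

using System;
using System.Data.SqlClient;

class PortedConnectionDemo
{
    static void Main()
    {
        // "host,port" targets the static port directly, so the SQL Browser
        // service is not needed to resolve the SERVER01\dev08 instance name.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "SERVER01,1466",
            IntegratedSecurity = true
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            Console.WriteLine("Connected: " + connection.ServerVersion);
        }
    }
}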

    Read the article

  • How to split an m4v file?

    - by Chris
    I downloaded a TV season from iTunes (.m4v files); however, each file contains three episodes. I'd like to chop these up so that each episode is in its own file. Googling around for a while didn't provide any promising leads. What's the easiest way to split these files up? Thanks in advance, Cheers!

    Read the article

  • Lighttpd not cleanly restarting (address already in use)

    - by NilObject
    When doing a dist-upgrade recently, my lighttpd-1.4.19 install on Ubuntu 8.0.4 has begun failing to restart or reload properly with the /etc/init.d/lighttpd restart command. ~$ sudo /etc/init.d/lighttpd restart * Stopping web server lighttpd ...done. * Starting web server lighttpd 2009-06-13 04:06:36: (network.c.300) can't bind to port: 80 Address already in use ...fail! The same error occurs when I do a reload. The way I get around it is to kill lighttpd and then issue the start command, but it seems like I shouldn't have to do that :) I've looked at my config files, and can't spot any immediate errors. Does anyone have any ideas what can be causing this error? This seems to be the latest version as of writing this question that is available via the apt-get route. My config file is: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load # mod_access, mod_accesslog and mod_alias are loaded by default # all other module should only be loaded if neccesary # - saves some time # - saves memory server.modules = ( "mod_access", "mod_alias", "mod_accesslog", "mod_compress", "mod_fastcgi", "mod_rewrite", "mod_redirect", ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" fastcgi.server = (".php" => (( "bin-path" => "/usr/bin/php5-cgi", "socket" => "/tmp/php.socket" ))) ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "audio/x-wav", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".rss" => "application/rss+xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar" ) include_shell "/usr/share/lighttpd/include-conf-enabled.pl" My /etc/init.d/lighttpd script is (untouched from installation): #!/bin/sh ### BEGIN INIT INFO # Provides: lighttpd # Required-Start: networking # Required-Stop: networking # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Start the lighttpd web server. ### END INIT INFO PATH=/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/sbin/lighttpd NAME=lighttpd DESC="web server" PIDFILE=/var/run/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin" SSD="/sbin/start-stop-daemon" DAEMON_OPTS="-f /etc/lighttpd/lighttpd.conf" test -x $DAEMON || exit 0 set -e # be sure there is a /var/run/lighttpd, even with tmpfs mkdir -p /var/run/lighttpd > /dev/null 2> /dev/null chown www-data:www-data /var/run/lighttpd chmod 0750 /var/run/lighttpd . /lib/lsb/init-functions case "$1" in start) log_daemon_msg "Starting $DESC" $NAME if ! 
$ENV $SSD --start --quiet\ --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then log_end_msg 1 else log_end_msg 0 fi ;; stop) log_daemon_msg "Stopping $DESC" $NAME if $SSD --quiet --stop --oknodo --retry 30\ --pidfile $PIDFILE --exec $DAEMON; then rm -f $PIDFILE log_end_msg 0 else log_end_msg 1 fi ;; reload) log_daemon_msg "Reloading $DESC configuration" $NAME if $SSD --stop --signal 2 --oknodo --retry 30\ --quiet --pidfile $PIDFILE --exec $DAEMON; then if $ENV $SSD --start --quiet \ --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then log_end_msg 0 else log_end_msg 1 fi else log_end_msg 1 fi ;; restart|force-reload) $0 stop [ -r $PIDFILE ] && while pidof lighttpd |\ grep -q `cat $PIDFILE 2>/dev/null` 2>/dev/null ; do sleep 1; done $0 start ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2 exit 1 ;; esac exit 0

    Read the article

  • How do you manage private queue permissions on a Windows Server 2008 cluster?

    - by David
    On Windows Server 2003 there was an option to allow a resource to interact with the desktop; this let you run the Computer Management MMC snap-in against the virtual name of the cluster and manage permissions on the cluster's private message queues. Windows Server 2008 failover clustering has removed that checkbox, so applications can no longer interact with the desktop. My question, then, is how does one go about managing private queue permissions on the clustered (virtual name) server?
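For what it's worth, one commonly suggested workaround is sketched below (an assumption-laden sketch, not a verified procedure): on the node that currently owns the clustered MSMQ resource, set the _CLUSTER_NETWORK_NAME_ environment variable to the cluster's virtual name so that MSMQ calls in that process target the clustered instance rather than the node-local one. In C#, with a reference to System.Messaging.dll (the virtual name, queue path and account below are placeholders):

using System;
using System.Messaging;

class ClusteredQueuePermissions
{
    static void Main()
    {
        // Redirect MSMQ calls made by this process to the clustered MSMQ
        // instance that lives under the cluster's virtual network name.
        Environment.SetEnvironmentVariable("_CLUSTER_NETWORK_NAME_", "CLUSTERVNAME");

        // The private queue now resolves against the virtual server, so it
        // can be administered as if it were local.
        using (var queue = new MessageQueue(@".\private$\myqueue"))
        {
            queue.SetPermissions(@"MYDOMAIN\AppServiceAccount",
                MessageQueueAccessRights.FullControl,
                AccessControlEntryType.Allow);
        }
    }
}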

    Read the article

  • solved: passenger(mod_rails) fails to start puppet master under nginx

    - by Anadi Misra
    On the server [root@bangvmpllDA02 logs]# ruby -v ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux] [root@bangvmpllDA02 logs]# puppet --version 3.0.1 and [root@bangvmpllDA02 logs]# service nginx configtest nginx: the configuration file /apps/nginx/nginx.conf syntax is ok nginx: configuration file /apps/nginx/nginx.conf test is successful [root@bangvmpllDA02 logs]# service nginx status nginx (pid 25923 25921 25920 25917 25908) is running... [root@bangvmpllDA02 logs]# however none of my agents are able to connect to the master, they all fail with errors like so [amisr1@blramisr195602 ~]$ puppet agent --test --verbose --server bangvmpllda02.XXX.com Info: Creating a new SSL certificate request for blramisr195602.XXX.com Info: Certificate Request fingerprint (SHA256): 26:EB:08:1F:82:32:E4:03:7A:64:8E:30:A3:99:93:26:E6:66:B9:B0:49:B6:08:F9:67:CA:1B:0C:00:B9:1D:41 Error: Could not request certificate: Error 405 on SERVER: <html> <head><title>405 Not Allowed</title></head> <body bgcolor="white"> <center><h1>405 Not Allowed</h1></center> <hr><center>nginx</center> </body> </html> Exiting; failed to retrieve certificate and waitforcert is disabled when I check logs on puppet master [root@bangvmpllDA02 logs]# tail puppet_access.log [05/Dec/2012:17:45:18 +0530] "GET /production/certificate/ca? HTTP/1.1" 404 162 "-" "Ruby" [05/Dec/2012:18:32:23 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-" [05/Dec/2012:18:33:33 +0530] "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-" [05/Dec/2012:18:33:33 +0530] "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-" [05/Dec/2012:18:33:33 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-" and the error logs show that nginx is not really able to process the request well 2012/12/05 18:33:33 [error] 25920#0: *23 open() "/etc/puppet/rack/public/production/certificate/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:33:33 [error] 25920#0: *24 open() "/etc/puppet/rack/public/production/certificate_request/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:47:56 [error] 25923#0: *27 open() "/etc/puppet/rack/public/production/certificate/ca" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate/ca? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" 2012/12/05 18:47:56 [error] 25923#0: *28 open() "/etc/puppet/rack/public/production/certificate_request/blramisr195602.XXX.com" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate_request/blramisr195602.XXX.com? 
HTTP/1.1", host: "bangvmpllda02.XXX.com:8140" Passenger does not show any application groups either [root@bangvmpllDA02 nginx]# passenger-status ----------- General information ----------- max = 15 count = 0 active = 0 inactive = 0 Waiting on global queue: 0 ----------- Application groups ----------- [root@bangvmpllDA02 nginx]# here's my nginx configuration [root@bangvmpllDA02 logs]# cat ../nginx.conf user puppet; worker_processes 4; #error_log logs/error.log; #error_log logs/error.log notice; error_log logs/error.log info; #pid logs/nginx.pid; events { use epoll; worker_connections 1024; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; #tcp_nopush on; server_tokens off; #keepalive_timeout 0; keepalive_timeout 120; gzip on; gzip_http_version 1.1; gzip_disable "msie6"; gzip_vary on; gzip_min_length 1100; gzip_buffers 64 8k; gzip_comp_level 3; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml; server { listen 80; server_name bangvmpllda02.XXXX.com; charset utf-8; #access_log logs/http.access.log main; location / { root html; index index.html index.htm index.php; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root html; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { access_log off; log_not_found off; deny all; } location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ { access_log off; log_not_found off; expires 2d; } } # Passenger needed for puppet passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18; passenger_ruby /usr/bin/ruby; passenger_max_pool_size 15; server { ssl on; listen 8140 default ssl; server_name bangvmpllda02.XXXX.com; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /etc/puppet/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } } and the puppet.conf [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. 
rundir = /var/run/puppet dns_alt_names = devops.XXXX.com,devops confdir = /etc/puppet vardir = /var/lib/puppet storeconfigs = true storeconfigs_backend = puppetdb thin_storeconfigs = false async_storeconfigs = false ssl_client_header = SSL_CLIENT_S_D ssl_client_verify_header = SSL_CLIENT_VERIFY # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl Any ideas where I am going wrong? I checked the directory permissions; /usr/share/puppet, /etc/puppet and /var/lib/puppet (and the files inside them) are owned by the puppet user. Solved: the simple solution to my complicated problem was that I had placed config.ru in the wrong place - it was in /etc/puppet/rack/public, and moving it up to /etc/puppet/rack fixed it. Well!!! :-/

    Read the article

  • Unable to uninstall SQL 2008 Instance(s)

    - by ichoudhury
    We have a Windows 2008 R2 high-availability cluster and were just going through the first phase of configuration. Somebody accidentally installed the instance incorrectly, so I was hoping to uninstall and reinstall. But when I attempt the uninstall, it fails with the following message: "Object reference not set to an instance of an object". (FYI, the SQL instance is not yet clustered.) Any ideas?

    Read the article

  • Where can I go to learn how to read a SQL Server execution plan?

    - by Chris Lively
    I'm looking for resources that can teach me how to properly read a SQL Server execution plan. I'm a long-time developer with tons of SQL Server experience, but I've never really learned how to understand what an execution plan is telling me. I'm looking for links, books, anything that can explain things like whether a clustered index scan is good or bad, along with examples of how to fix issues.
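Until then, it can help to capture plans programmatically and study the raw XML. A rough C# sketch (the connection string and query are placeholders): SET STATISTICS XML ON asks SQL Server to return the actual execution plan as an extra XML result set after the data, which can then be saved as a .sqlplan file and opened in Management Studio's graphical plan viewer.

using System;
using System.Data.SqlClient;

class CapturePlanDemo
{
    static void Main()
    {
        using (var connection = new SqlConnection(
            "Server=.;Database=MyDb;Integrated Security=true"))
        {
            connection.Open();

            // Ask for the actual plan to be returned alongside the results.
            using (var enable = new SqlCommand("SET STATISTICS XML ON", connection))
            {
                enable.ExecuteNonQuery();
            }

            using (var command = new SqlCommand("SELECT TOP 10 * FROM dbo.MyTable", connection))
            using (var reader = command.ExecuteReader())
            {
                do
                {
                    while (reader.Read())
                    {
                        // The query's own rows come first; the trailing
                        // single-column result set holds the XML showplan.
                        if (reader.FieldCount == 1)
                        {
                            Console.WriteLine(reader.GetValue(0));
                        }
                    }
                } while (reader.NextResult());
            }
        }
    }
}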

    Read the article

  • Hyper-V SERVER R2 Cluster

    - by ToreTrygg
    I keep hearing and reading that Hyper-V Server 2008 R2 can be clustered, but I can't find out how to set this up. I can find how to cluster Windows Server 2008 R2 with the Hyper-V role, and I understand how to do that. But how do you set up Hyper-V SERVER 2008 R2 itself in a cluster? Thanks.

    Read the article

  • Serving a video and audio upload and streaming intense site

    - by Pollux Khafra
    I'm about to launch a new site that allows users to both upload and stream audio and video, and I don't know anything about the server side of things. My original plan was to just use a dedicated server through HostGator, but from what I'm reading, cloud hosting or a load-balanced cluster is the best way to go for what I'm trying to do. All the articles seem to have an agenda to sell you on an affiliate web host, so how should I really do this?

    Read the article

  • Need to reformat SQL Cluster Disk. How do I recover my SQL installation?

    - by I.T. Support
    We need to reformat the SQL cluster disk in our SQL cluster. The drive contains the shared installation files for SQL as well as the databases. My concern is how SQL/the cluster will react after we wipe the disk resource. Questions: Is there a defined procedure for this? How should we back up and restore the disk? After the reformat, how do we get the clustered SQL Server back online? Thanks

    Read the article

  • Is it possible that an insert would use parallelism?

    - by Lieven Cardoen
    In what situation can inserting a simple record (the table has about 10 columns, two of them xml) use parallelism? CREATE TABLE [dbo].[PackageSessions]( [PackageSessionId] [int] IDENTITY(1,1) NOT NULL, [PackageId] [int] NOT NULL, [UserId] [int] NOT NULL, [StartDateTime] [datetime] NULL, [StopDateTime] [datetime] NULL, [Score] [float] NOT NULL, [ScoreMax] [float] NOT NULL, [CompletionStatus] [int] NOT NULL, [ReviewPlayerContextId] [int] NOT NULL, [PlayerContextId] [int] NOT NULL, [ReducedScore] [float] NOT NULL, [ReducedScoreMax] [float] NOT NULL, [PackageSnapShot] [xml] NULL, [InterfaceLanguageId] [int] NOT NULL, [Data] [xml] NULL, [PackageSessionLanguageId] [int] NOT NULL, CONSTRAINT [PackageSessions_PK] PRIMARY KEY CLUSTERED ( [PackageSessionId] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY]
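As an aside: if the plan for such an insert does go parallel (more typically because of an INSERT ... SELECT source or index maintenance than because of the single-row insert itself), a MAXDOP hint pins the statement to a serial plan. A hedged C# sketch against the table above (connection string and values are placeholders; only the NOT NULL columns are supplied):

using System.Data.SqlClient;

class SerialInsertDemo
{
    static void Main()
    {
        // OPTION (MAXDOP 1) limits this one statement to a serial plan.
        const string sql =
            "INSERT INTO dbo.PackageSessions (PackageId, UserId, Score, ScoreMax, " +
            "CompletionStatus, ReviewPlayerContextId, PlayerContextId, ReducedScore, " +
            "ReducedScoreMax, InterfaceLanguageId, PackageSessionLanguageId) " +
            "VALUES (@packageId, @userId, 0, 0, 0, 0, 0, 0, 0, 1, 1) " +
            "OPTION (MAXDOP 1)";

        using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@packageId", 1);
            command.Parameters.AddWithValue("@userId", 1);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}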

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code driven minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {          ViewData["Title"] = "The Title";          ViewData["Message"] = "Hello World!"; } Those properties can be accessed in the Index view using code like this: <h2>View.Title</h2> <p>View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!"; } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines, as shown in the following figure: Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IvalidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example: public ActionResult List(int id) {     if (id < 0) {                 return HttpNotFound();     }     return View(); } HttpStatusCodeResult Action The new HttpStatusCodeResult action result is used to set the response status code and description. Permanent Redirect The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects: - RedirectPermanent - RedirectToRoutePermanent - RedirectToActionPermanent These methods return an instance of HttpRedirectResult with the Permanent property set to true. Breaking Changes The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without a specified order Order value. In MVC 3, this order has been reversed in order to allow the most specific exception handler to execute first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order. Known Issues When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
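To round out the JsonValueProviderFactory description above, here is a hedged sketch of the scenario it enables: the client posts JSON, and model binding maps it straight onto an action-method parameter. The view model, controller and route values are invented for the example; on the client, the request must be sent with the application/json content type for the JSON value provider to kick in.

using System.Web.Mvc;

public class ReviewViewModel
{
    public int ProductId { get; set; }
    public string Comment { get; set; }
}

public class ReviewsController : Controller
{
    [HttpPost]
    public ActionResult Create(ReviewViewModel review)
    {
        // The JSON body arrives already bound; normal validation applies.
        if (review == null || !ModelState.IsValid)
        {
            return new HttpStatusCodeResult(400, "Invalid review payload");
        }

        // ... persist the review here ...

        return Json(new { saved = true, productId = review.ProductId });
    }
}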

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >