Search Results

Search found 3760 results on 151 pages for 'multiple entries'.


  • What can cause the system to freeze in a way where even the reset button takes a long time to react?

    - by ThiefMaster
    What can be the reason for system freezes that are so "hard" that even the hardware reset button takes about 3 seconds until it actually resets the system (and then it actually powers down and up again instead of doing a "clean" hard reset like when pressing it during a normally running system)? Since it initially happened mainly while playing videos from YouTube, I suspected the graphics card - however, I replaced it recently and that did not change anything. It still happens from time to time (and sometimes more often, like a few times in the last few hours). The system is running Windows 7 - but I don't think this matters, since I don't think any software, not even the OS, can actually affect the reset button's behaviour. The PC is not overheated and the freezes happen randomly. There is also no malware on the system. The CPU is an Intel Core i7-920 on a Gigabyte EX58-UD5 mainboard. What could be the cause of this problem? Faulty RAM? I have not run a full memtest86 check yet, but I wonder if there is a more likely issue than faulty RAM - checking 12 GB of RAM does take some time, after all! There are no entries in the event log - but that's what I expected, since the system freezes so hard that I doubt it has time to write anything to any log.

    Read the article

  • Grub Autostart with timeout

    - by BetaRide
    On Ubuntu 10.04 LTS I want GRUB to start the default OS after 5 seconds, and I'd like to see the output of the startup scripts. Currently GRUB waits forever until I hit return, and the output of the startup scripts isn't visible. Can someone tell me how I have to configure /etc/default/grub or any other setup? I tried to play with GRUB_TIMEOUT and GRUB_DEFAULT and ran sudo update-grub afterwards, but nothing changed. Any ideas? Here is my /etc/default/grub:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=5
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_SAVEDEFAULT=true
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        # GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
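    A minimal sketch of one likely fix, assuming the hang comes from GRUB_HIDDEN_TIMEOUT (which takes precedence over GRUB_TIMEOUT on Lucid) and the startup output being hidden by the quiet/splash kernel options:

        # /etc/default/grub (edited - only the changed lines shown)
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=5        # comment out so GRUB_TIMEOUT governs the menu
        GRUB_TIMEOUT=5                # boot the default entry after 5 seconds
        GRUB_CMDLINE_LINUX_DEFAULT="" # drop "quiet splash" to see the startup messages

        $ sudo update-grub

    One more hedged possibility: on Ubuntu an unclean shutdown sets GRUB's recordfail flag, and the generated grub.cfg then waits indefinitely at the menu regardless of the configured timeout, which would match the "waits forever" symptom.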

    Read the article

  • Windows 7 delayed file delete

    - by GregoryM
    I'm stuck with a pretty rare problem that happens on Windows 7 only. Every time I delete a file with an *.exe extension through Explorer, the file doesn't get deleted immediately; I'm forced to wait around one to two minutes before the system will delete the file. The main problem is that I cannot develop in this situation, because every time I build my solution, the old executable gets 'deleted' but is still there, so the new one cannot be created by Visual Studio. This problem breaks the Steam update process and a few other installers' functionality too. A freshly installed Win7 doesn't have this kind of trouble, so I guess this must be some bad registry entries or some service. Browsing the internet for solutions I found only this: http://www.sevenforums.com/software/72091-several-minute-delay-when-deleting-any-exe-file.html. But the solution the author found is not working (change the userName :)). Any ideas how to find what causes this to happen? BTW: when I place the file into the Recycle Bin, no delay occurs. When I delete the file with Total Commander there is no delay either. Tech details: Windows 7 x64 Ultimate. UPD: maybe some shadow copying or system restore service (though I have System Restore turned off) is blocking the files? Can't even guess...
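    For reference, one commonly reported cause of exactly this symptom (an assumption here, not something confirmed in the post) is the Application Experience service (AeLookupSvc) being stopped or disabled, which can make Explorer block on a compatibility lookup whenever an .exe is touched. A quick check from an elevated command prompt:

        :: is the Application Experience service running?
        sc query AeLookupSvc
        :: re-enable it if it is disabled (note the space after start=)
        sc config AeLookupSvc start= demand
        sc start AeLookupSvc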

    Read the article

  • Linux SFTP and many local user accounts, limits with mount --bind?

    - by user123428
    I am in the process of building a solution to handle many developers (possibly hundreds) working on their files via SFTP, each one jailed in their home directory. For our particular needs, we have a Samba mount point that contains all of the users' home directories. I have started developing the following solution and hit some walls:

    - I have configured an Ubuntu Lucid server as the SFTP server.
    - In order to jail each user in their home directory (without allowing them to browse a directory up and see all the other users' folders) I am using mount --bind and not a symbolic link (also, some FTP clients don't really work with symlinks).
    - The user accounts are local Unix user accounts on the SFTP server (not using a directory service or anything) that have an empty home folder created on the local machine; I then use mount --bind to bind the empty folder to the actual user's home directory on the Samba share.

    With this solution I am hitting a couple of problems: in the case of a server reboot, all the bind mounts are lost because they are not written in fstab. I have also read somewhere that the maximum number of entries in fstab is 400 (which does not really help us). I have thought of writing something that stores the mounts in a text file as a backup, and on server reboot running a script that re-mounts them. I am just really unsure about this whole process and was wondering if anyone has any insight on a possibly better solution for SFTP? (not FTP)
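    On the persistence point, bind mounts can live in /etc/fstab like any other mount and so survive reboots; a sketch with hypothetical paths, one line per jailed user:

        # /etc/fstab - bind each user's Samba-backed home into the jail
        /mnt/samba/homes/alice  /home/jail/alice  none  bind  0  0
        /mnt/samba/homes/bob    /home/jail/bob    none  bind  0  0

    Equivalently, a small boot-time script can loop over the user list and run mount --bind for each entry, which avoids keeping hundreds of fstab lines.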

    Read the article

  • Is it possible to guide installation of new programs using %ProgramFiles%? [closed]

    - by ??????? ???????????
    The purpose of this is to have the default "program files" (32- and 64-bit) folders located under an arbitrary path, possibly on a drive separate from where Windows lives. Initially I thought that this might be done using a system environment variable through the dialog located under Control Panel - System - Advanced - Environment Variables. These variables turned out to be set in the registry under the key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion. However, one particular entry is confusing: the ProgramFilesPath entry seems to point at an environment variable that is not defined under the same registry key. I could assume that there is no difference between ProgramFilesDir and ProgramFilesPath and that one of them exists for backwards compatibility, but having some legitimate resource from Microsoft to look at would be better than guessing. After receiving some worrying feedback about having both 32- and 64-bit applications in the same folder, I have decided not to ask about the feasibility of this, to avoid discussion. The real question is whether the desired effect can be attained by "cutting into" the Windows setup process and modifying those registry entries as early as possible. These settings should be system wide and not only for software installed by a particular user. If this is indeed something that can be done, I wonder if there are any subtle pitfalls. Programs that expect libraries and other resources to be in default locations can probably be dealt with using the same technique as employed by Windows to re-map the "Documents and Settings" folders and the like (i.e. breaking legacy applications is not a real concern).
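    For reference, a sketch of the registry values in question (whether Windows setup and installers honor a change here is exactly the open question, and redirecting Program Files this way is not supported by Microsoft, so treat these as illustrative only):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "D:\Program Files (x86)" /f

    On 64-bit systems there is also a ProgramW6432Dir value, which backs the %ProgramW6432% variable that 32-bit processes see.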

    Read the article

  • Global Email Forwarding with EXIM?

    - by Dexirian
    Been trying to find a solution to this for a while without success, so here I go: I was given the task of building a high-availability, load-balanced network cluster for our 2 Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like my server 2 to only do mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature. I created 2 different rsync jobs that sync, update, and delete the files for mail and websites. Now I was able to successfully transfer the mail accounts from 1 to 2, and server 2 works flawlessly. All I had to do was change the MX entries to point to the new server and bingo. Now my problem is, some clients have their mail software configured so that it points to oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server for obvious reasons. I thought of using .forward files and adding them to the home directories of the concerned users, but that would be very difficult. So my question is: is there a way to configure Exim so that it will only forward mail to the new server? I need to change all the users so they use their mail on server 2 without them doing anything. Thanks!

    EDIT - TO CLARIFY MY PROBLEM: Some clients have their mail pointing to oldserver.xyz instead of mail.oldserver.xyz. I want to know if I can do something to prevent modifying the clients' configuration. I would also like to know if there is a way to find out which clients aren't properly configured.
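    A sketch of one way to do the server-side forwarding in Exim (assuming you can add a custom router to the old server's exim.conf; the file name here is a placeholder): a manualroute router placed before the local delivery routers relays every listed domain to the new machine, so nothing has to change per account:

        # on the old server, early in the routers section of exim.conf
        migrated_domains:
          driver = manualroute
          domains = lsearch;/etc/exim_migrated_domains
          transport = remote_smtp
          route_list = * newserver.domain.com
          no_more

    Note this only covers mail delivered to the old box over SMTP; clients that fetch over POP/IMAP from oldserver.domain.com still hit the old server, and grepping its POP/IMAP and SMTP auth logs for client IPs is one way to find who is still misconfigured.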

    Read the article

  • Debian boot problems

    - by psp
    I've got a Debian server with one disk. No dual boot or anything fancy, just Debian 6.0 (Squeeze). I rebooted the server today and now it doesn't boot. I get the following (from GRUB):

        error: hd0,msdos out of disk

    I then get a grub prompt:

        grub rescue>

    I've been googling for ages with no luck. /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        aufs  /    aufs   rw            0 0
        tmpfs /tmp tmpfs  nosuid,nodev  0 0

    I've run Debian rescue mode and looked through the syslog. I see hundreds of entries like this:

        Jun 30 22:51:08 kernel: [ 615.217382] sd 2:0:0:0: [sda] Unhandled error code
        Jun 30 22:51:08 kernel: [ 615.217385] sd 2:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
        Jun 30 22:51:08 kernel: [ 615.217389] sd 2:0:0:0: [sda] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00
        Jun 30 22:51:08 kernel: [ 615.217399] end_request: I/O error, dev sda, logical block 0
        Jun 30 22:51:08 kernel: [ 615.217402] Buffer I/O error on device sda, logical block 0
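    A note on what that log is saying: I/O errors on logical block 0 of sda mean the drive is failing to read its very first sectors, which would explain both GRUB's "out of disk" message and the rescue-mode errors; that points at failing hardware rather than a GRUB misconfiguration. A couple of hedged checks from the rescue shell, assuming the tools are present in the rescue image:

        # SMART health, reallocated and pending sector counts
        smartctl -a /dev/sda
        # can the start of the disk be read at all?
        dd if=/dev/sda of=/dev/null bs=1M count=100

    If the disk does respond, GRUB can usually be reinstalled from rescue mode, e.g. mount the root filesystem on /mnt and run grub-install --root-directory=/mnt /dev/sda (newer GRUB versions use --boot-directory instead).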

    Read the article

  • File gone or altered after MySQL[HY000][2002] error [on hold]

    - by Psyberion
    I'm working on a rather small project, and today I got an SQLSTATE[HY000] [2002]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' error. After a bit of googling and a few attempts to restart the mysqld service, I gave up and tried rebooting the computer. This did the trick; MySQL was now running fine. I did, however, get a more serious issue: some files were missing, others were altered, and a few rows in the MySQL database were gone. It's really strange - it's like the whole project has been reset back two or three days, and I have no clue why. Some additional details about this: I save my files after every line of code (I'm religious about this), so I haven't lost the files that way. I was accessing the server via SSH when the error occurred, so I did the programming and the reboot over SSH. The server is a Raspberry Pi, model B, with Raspbian, on which I run Apache2. I was viewing the site and had an active session when I rebooted the system. The pages I lost did work just before this all happened. The MySQL fault occurred when I tried to add a text NOT NULL column to a table which had entries. Now, the amount of lost work isn't really that much, so I'm not really looking for help recovering the files. The reason I'm posting this is because I wonder how this happened, and why.

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    This post shows how to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. In this document I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

    Architecture layout:

    Figure 1. Architecture layout

    Prerequisite

    Set up the Automated Install (AI) server using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup will be creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format produced by the zonecfg export command:

        # zonecfg -z zone1 export > /var/tmp/zone1
        # cat /var/tmp/zone1
        create -b
        set brand=solaris
        set zonepath=/rpool/zones/zone1
        set autoboot=true
        set ip-type=exclusive
        add anet
        set linkname=net0
        set lower-link=auto
        set configure-allowed-address=true
        set link-protection=mac-nospoof
        set mac-address=random
        end

    Create a backup copy of this file under a different name, for example, zone2:

        # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information; you should change the zonepath, for example:

        set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:

        # mkdir /export/zone_config

    Share the directory for the Zones configuration files:

        # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS shared directory:

        # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:

        # share
        export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as a client to the Install Service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

        # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:

        # installadm list -c
        Service Name  Client Address     Arch   Image Path
        ------------  --------------     ----   ----------
        s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:

        # installadm list -m
        Service Name   Manifest        Status
        ------------   --------        ------
        default-i386   orig_default   Default
        s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

        # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy:

        # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2; you should replace server_ip with the IP address of the NFS server:

        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance>
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <source>
                <publisher name="solaris">
                  <origin name="http://pkg.oracle.com/solaris/release"/>
                </publisher>
              </source>
              <software_data action="install">
                <name>pkg:/entire@latest</name>
                <name>pkg:/group/system/solaris-large-server</name>
              </software_data>
            </software>
            <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
            <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
          </ai_instance>
        </auto_install>

    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

        # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

    You can verify the manifest creation using the following command:

        # installadm list -n s11x86service -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           orig_default        Default  None
           gzmanifest          Inactive None

    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

    Step 5: Non-Global Zone manifest setup

    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and in this example we use it:

        # cat /usr/share/auto_install/manifest/zone_default.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <!--
         Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
        -->
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance name="zone_default">
            <target>
              <logical>
                <zpool name="rpool">
                  <!--
                    Subsequent <filesystem> entries instruct an installer
                    to create following ZFS datasets:

                        <root_pool>/export         (mounted on /export)
                        <root_pool>/export/home    (mounted on /export/home)

                    Those datasets are part of standard environment
                    and should be always created.

                    In rare cases, if there is a need to deploy a zone
                    without these datasets, either comment out or remove
                    <filesystem> entries. In such scenario, it has to be also
                    assured that in case of non-interactive post-install
                    configuration, creation of initial user account is
                    disabled in related system configuration profile.
                    Otherwise the installed zone would fail to boot.
                  -->
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris">
                    <options>
                      <option name="compression" value="on"/>
                    </options>
                  </be>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <destination>
                <image>
                  <!-- Specify locales to install -->
                  <facet set="false">facet.locale.*</facet>
                  <facet set="true">facet.locale.de</facet>
                  <facet set="true">facet.locale.de_DE</facet>
                  <facet set="true">facet.locale.en</facet>
                  <facet set="true">facet.locale.en_US</facet>
                  <facet set="true">facet.locale.es</facet>
                  <facet set="true">facet.locale.es_ES</facet>
                  <facet set="true">facet.locale.fr</facet>
                  <facet set="true">facet.locale.fr_FR</facet>
                  <facet set="true">facet.locale.it</facet>
                  <facet set="true">facet.locale.it_IT</facet>
                  <facet set="true">facet.locale.ja</facet>
                  <facet set="true">facet.locale.ja_*</facet>
                  <facet set="true">facet.locale.ko</facet>
                  <facet set="true">facet.locale.ko_*</facet>
                  <facet set="true">facet.locale.pt</facet>
                  <facet set="true">facet.locale.pt_BR</facet>
                  <facet set="true">facet.locale.zh</facet>
                  <facet set="true">facet.locale.zh_CN</facet>
                  <facet set="true">facet.locale.zh_TW</facet>
                </image>
              </destination>
              <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
              </software_data>
            </software>
          </ai_instance>
        </auto_install>

    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

        # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

    Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest:

        # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

    Note: Do not use the following elements or attributes in a non-global zone AI manifest:

        The auto_reboot attribute of the ai_instance element
        The http_proxy attribute of the ai_instance element
        The disk child element of the target element
        The noswap attribute of the logical element
        The nodump attribute of the logical element
        The configuration element

    Step 6: Global Zone profile setup

    We are going to create a global zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.:

        # sysconfig create-profile -o /var/tmp/gz_profile.xml

    You need to provide the host information, for example:

        Default router
        Root password
        DNS information

    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

    Figure 2. Profile creation menu

    You can validate the profile using the following command:

        # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
        Validating static profile gz_profile.xml...  Passed

    Next, instantiate a profile with the install service. In our case, use the following syntax for doing this:

        # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         None

    We can see that the gz_profile has been created and associated with the s11x86service install service.

    Step 7: Set up the Solaris Zones configuration profiles

    This step is similar to the Global Zone profile creation in step 6:

        # sysconfig create-profile -o /var/tmp/zone1_profile.xml
        # sysconfig create-profile -o /var/tmp/zone2_profile.xml

    You can validate the profiles using the following commands:

        # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
        Validating static profile zone1_profile.xml...  Passed
        # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
        Validating static profile zone2_profile.xml...  Passed

    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

    The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           zone1_profile      zonename = zone1
           zone2_profile      zonename = zone2
           gz_profile         None

    We can see that we have three profiles in the s11x86service install service:

        Global Zone   gz_profile
        zone1         zone1_profile
        zone2         zone2_profile

    Step 8: Global Zone setup

    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where: gzmanifest is the name of the manifest; gz_profile is the name of the configuration profile; mac="0:14:4f:2:a:19" is the client (global zone) MAC address; s11x86service is the install service name.

        # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:

        # installadm list -n s11x86service -p -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           gzmanifest                   mac  = 00:14:4F:02:0A:19
           orig_default        Default  None

        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         mac      = 00:14:4F:02:0A:19
           zone2_profile      zonename = zone2
           zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

    Figure 3. Network Boot

    Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

    Figure 4. GRUB Menu

    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

    You can list the status of all the Solaris Zones using the following command:

        # zoneadm list -civ

    Once the Zones are in the running state you can log in to a Zone using the following command:

        # zlogin zone1

    Troubleshooting Automated Installations

    If an installation to a client system failed, you can find the client log at /system/volatile/install_log.

    NOTE: Zones are not installed if any of the following errors occurs:

        A zone config file is not syntactically correct.
        A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
        Required datasets are not configured in the global zone.

    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • deadlocks in the innodb status

    - by shantanuo
    The MySQL server has suddenly become very slow. There are no queries in the slow query log, but the InnoDB status shows something like the following. Does it mean that it is due to an InnoDB deadlock? If yes, what is the way out?

        *************************** 1. row ***************************
        Status:
        =====================================
        100315 12:55:29 INNODB MONITOR OUTPUT
        =====================================
        Per second averages calculated from the last 5 seconds
        ----------
        SEMAPHORES
        ----------
        OS WAIT ARRAY INFO: reservation count 187532, signal count 188120
        Mutex spin waits 0, rounds 61908654, OS waits 33052
        RW-shared spins 89241, OS waits 41948; RW-excl spins 5857, OS waits 1557
        ------------------------
        LATEST DETECTED DEADLOCK
        ------------------------
        100315 12:43:02
        *** (1) TRANSACTION:
        TRANSACTION 0 56996536, ACTIVE 0 sec, process no 5000, OS thread id 3031395216 starting index read
        mysql tables in use 1, locked 1
        LOCK WAIT 6 lock struct(s), heap size 1024, undo log entries 6
        MySQL thread id 994, query id 7699751 localhost application Searching rows for update
        UPDATE QUERY
        *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
        RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `dbII/tbl_ticket_block_master` trx id 0 56996536 lock_mode X locks rec but not gap waiting
        Record lock, heap no 141 PHYSICAL RECORD: n_fields 23; compact format; info bits 0
        0: len 7; hex 33353837393936; asc 3587996;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 2; hex 6f6b; asc ok;; 4: len 6; hex 0000035957fe; asc YW ;; 5: len 7; hex 000000401737c0; asc @ 7 ;; 6: SQL NULL; 7: SQL NULL; 8: SQL NULL; 9: len 3; hex 8fb46e; asc n;; 10: SQL NULL; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ;; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9ceebe; asc K ;; 16: len 1; hex 30; asc 0;; 17: len 4; hex 80006ae8; asc j ;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;;
        *** (2) TRANSACTION:
        TRANSACTION 0 56996527, ACTIVE 0 sec, process no 5000, OS thread id 2961476496 fetching rows, thread declared inside InnoDB 237
        mysql tables in use 3, locked 3
        121 lock struct(s), heap size 11584, undo log entries 16
        MySQL thread id 995, query id 7699729 localhost application Searching rows for update
        UPDATE QUERY
        *** (2) HOLDS THE LOCK(S):
        RECORD LOCKS space id 0 page no 4073 n bits 296 index `PRIMARY` of table `DBII/tbl_ticket_block_master` trx id 0 56996527 lock_mode X
        Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
        0: len 8; hex 73757072656d756d; asc supremum;;
        Record lock, heap no 2 PHYSICAL RECORD: n_fields 23; compact format; info bits 0
        0: len 7; hex 33353837343631; asc 3587461;; 1: len 4; hex 800001f4; asc ;; 2: len 1; hex 47; asc G;; 3: len 6; hex 497373756564; asc Issued;; 4: len 6; hex 000003425295; asc BR ;; 5: len 7; hex 8000000464012c; asc d ,;; 6: SQL NULL; 7: len 4; hex 80000058; asc X;; 8: len 1; hex 43; asc C;; 9: len 3; hex 8fb465; asc e;; 10: len 3; hex 8fb46d; asc m;; 11: len 1; hex 30; asc 0;; 12: len 0; hex ; asc ;; 13: SQL NULL; 14: len 1; hex 33; asc 3;; 15: len 4; hex 4b9b33a2; asc K 3 ;; 16: len 3; hex 756d67; asc umg;; 17: len 4; hex 80006744; asc gD;; 18: len 0; hex ; asc ;; 19: len 0; hex ; asc ;; 20: len 0; hex ; asc ;; 21: len 0; hex ; asc ;; 22: len 0; hex ; asc ;;
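    For what it's worth on the "way out": a deadlock between two UPDATEs on the same PRIMARY index usually means the two transactions touch rows in different orders; InnoDB resolves it by rolling one transaction back, so the standard client-side remedy is to catch the deadlock error and re-run the transaction. A minimal sketch (the error code is MySQL's standard one; the command is the one that produced the output above):

        # re-check the latest detected deadlock at any time
        mysql -e "SHOW ENGINE INNODB STATUS\G"

        -- application side: if a statement fails with
        --   ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
        -- roll back and re-run the whole transaction, ideally with a small retry limit.

    Note that occasional deadlocks are normal and are usually not what makes a server slow; for that, the SEMAPHORES section (the large mutex spin rounds figure) is worth a closer look.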

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available on http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ for creating a zip file. First file: zip.lib.php

        <?php
        /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */
        // vim: expandtab sw=4 ts=4 sts=4:
        /**
         * Zip file creation class.
         * Makes zip files.
         *
         * Last Modification and Extension By :
         *
         * Hasin Hayder
         * HomePage : www.hasinme.info
         * Email : [email protected]
         * IDE : PHP Designer 2005
         *
         * Originally Based on :
         *
         * http://www.zend.com/codex.php?id=535&single=1
         * By Eric Mueller <[email protected]>
         *
         * http://www.zend.com/codex.php?id=470&single=1
         * by Denis125 <[email protected]>
         *
         * a patch from Peter Listiak <[email protected]> for last modified
         * date and time of the compressed file
         *
         * Official ZIP file format: http://www.pkware.com/appnote.txt
         *
         * @access public
         */
        class zipfile
        {
            /**
             * Array to store compressed data
             * @var array $datasec
             */
            var $datasec = array();

            /**
             * Central directory
             * @var array $ctrl_dir
             */
            var $ctrl_dir = array();

            /**
             * End of central directory record
             * @var string $eof_ctrl_dir
             */
            var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";

            /**
             * Last offset position
             * @var integer $old_offset
             */
            var $old_offset = 0;

            /**
             * Converts an Unix timestamp to a four byte DOS date and time format (date
             * in high two bytes, time in low two bytes allowing magnitude comparison).
             *
             * @param integer the current Unix timestamp
             * @return integer the current date in a four byte DOS format
             * @access private
             */
            function unix2DosTime($unixtime = 0)
            {
                $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

                if ($timearray['year'] < 1980) {
                    $timearray['year'] = 1980;
                    $timearray['mon'] = 1;
                    $timearray['mday'] = 1;
                    $timearray['hours'] = 0;
                    $timearray['minutes'] = 0;
                    $timearray['seconds'] = 0;
                } // end if

                return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) |
                       ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1);
            } // end of the 'unix2DosTime()' method

            /**
             * Adds "file" to archive
             *
             * @param string file contents
             * @param string name of the file in the archive (may contains the path)
             * @param integer the current timestamp
             * @access public
             */
            function addFile($data, $name, $time = 0)
            {
                $name = str_replace('\\', '/', $name);

                $dtime = dechex($this->unix2DosTime($time));
                $hexdtime = '\x' . $dtime[6] . $dtime[7]
                          . '\x' . $dtime[4] . $dtime[5]
                          . '\x' . $dtime[2] . $dtime[3]
                          . '\x' . $dtime[0] . $dtime[1];
                eval('$hexdtime = "' . $hexdtime . '";');

                $fr = "\x50\x4b\x03\x04";
                $fr .= "\x14\x00";       // ver needed to extract
                $fr .= "\x00\x00";       // gen purpose bit flag
                $fr .= "\x08\x00";       // compression method
                $fr .= $hexdtime;        // last mod time and date

                // "local file header" segment
                $unc_len = strlen($data);
                $crc = crc32($data);
                $zdata = gzcompress($data);
                $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
                $c_len = strlen($zdata);
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize
                $fr .= pack('v', strlen($name));    // length of filename
                $fr .= pack('v', 0);                // extra field length
                $fr .= $name;

                // "file data" segment
                $fr .= $zdata;

                // "data descriptor" segment (optional but necessary if archive is not
                // served as file)
                $fr .= pack('V', $crc);             // crc32
                $fr .= pack('V', $c_len);           // compressed filesize
                $fr .= pack('V', $unc_len);         // uncompressed filesize

                // add this entry to array
                $this->datasec[] = $fr;

                // now add to central directory record
                $cdrec = "\x50\x4b\x01\x02";
                $cdrec .= "\x00\x00";               // version made by
                $cdrec .= "\x14\x00";               // version needed to extract
                $cdrec .= "\x00\x00";               // gen purpose bit flag
                $cdrec .= "\x08\x00";               // compression method
                $cdrec .= $hexdtime;                // last mod time & date
                $cdrec .= pack('V', $crc);          // crc32
                $cdrec .= pack('V', $c_len);        // compressed filesize
                $cdrec .= pack('V', $unc_len);      // uncompressed filesize
                $cdrec .= pack('v', strlen($name)); // length of filename
                $cdrec .= pack('v', 0);             // extra field length
                $cdrec .= pack('v', 0);             // file comment length
                $cdrec .= pack('v', 0);             // disk number start
                $cdrec .= pack('v', 0);             // internal file attributes
                $cdrec .= pack('V', 32);            // external file attributes - 'archive' bit set
                $cdrec .= pack('V', $this->old_offset); // relative offset of local header
                $this->old_offset += strlen($fr);
                $cdrec .= $name;

                // optional extra field, file comment goes here
                // save to central directory
                $this->ctrl_dir[] = $cdrec;
            } // end of the 'addFile()' method

            /**
             * Dumps out file
             *
             * @return string the zipped file
             * @access public
             */
            function file()
            {
                $data = implode('', $this->datasec);
                $ctrldir = implode('', $this->ctrl_dir);

                return $data . $ctrldir . $this->eof_ctrl_dir .
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries "on this disk"
                    pack('v', sizeof($this->ctrl_dir)) . // total # of entries overall
                    pack('V', strlen($ctrldir)) .        // size of central dir
                    pack('V', strlen($data)) .           // offset to start of central dir
                    "\x00\x00";                          // .zip file comment length
            } // end of the 'file()' method

            /**
             * A Wrapper of original addFile Function
             * Created By Hasin Hayder at 29th Jan, 1:29 AM
             *
             * @param array An Array of files with relative/absolute path to be added in Zip File
             * @access public
             */
            function addFiles($files /* Only Pass Array */)
            {
                foreach ($files as $file) {
                    if (is_file($file)) { // directory check
                        $data = implode("", file($file));
                        $this->addFile($data, $file);
                    }
                }
            }

            /**
             * A Wrapper of original file Function
             * Created By Hasin Hayder at 29th Jan, 1:29 AM
             *
             * @param string Output file name
             * @access public
             */
            function output($file)
            {
                $fp = fopen($file, "w");
                fwrite($fp, $this->file());
                fclose($fp);
            }
        } // end of the 'zipfile' class
        ?>

    My second file: newzip.php

        <?php
        include("zip.lib.php");
        $ziper = new zipfile();
        $ziper->addFiles(array("index.htm")); // array of files
        // the next three lines force an immediate download of the zip file:
        header("Content-type: application/octet-stream");
        header("Content-disposition: attachment; filename=test.zip");
        echo $ziper->file();
        ?>

    I am getting this warning while executing newzip.php:

        Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6

    I am unable to figure out the reason for the same. Please help me on this. Thanks

    Read the article

  • Please Critique this PHP Login Script

    - by NightMICU
    Greetings, A site I developed was recently compromised, most likely by a brute force or rainbow table attack. The original log-in script did not have a SALT; passwords were stored in MD5. Below is an updated script, complete with SALT and IP address banning. In addition, it will send a Mayday email & SMS and disable the account should the same IP address or account attempt 4 failed log-ins. Please look it over and let me know what could be improved, what is missing, and what is just plain strange. Many thanks!

        <?php
        // Start session
        session_start();

        // Include DB config
        include $_SERVER['DOCUMENT_ROOT'] . '/includes/pdo_conn.inc.php';

        // Error message array
        $errmsg_arr = array();
        $errflag = false;

        // Function to sanitize values received from the form. Prevents SQL injection
        function clean($str)
        {
            $str = @trim($str);
            if (get_magic_quotes_gpc()) {
                $str = stripslashes($str);
            }
            return $str;
        }

        // Define a SALT, the one here is for demo
        define('SALT', '63Yf5QNA');

        // Sanitize the POST values
        $login = clean($_POST['login']);
        $password = clean($_POST['password']);

        // Encrypt password
        $encryptedPassword = md5(SALT . $password);

        // Input Validations
        // Obtain IP address and check for past failed attempts
        $ip_address = $_SERVER['REMOTE_ADDR'];
        $checkIPBan = $db->prepare("SELECT COUNT(*) FROM ip_ban WHERE ipAddr = ? OR login = ?");
        $checkIPBan->execute(array($ip_address, $login));
        $numAttempts = $checkIPBan->fetchColumn();

        // If there are 4 failed attempts, send back to login and temporarily ban IP address
        if ($numAttempts == 1) {
            $getTotalAttempts = $db->prepare("SELECT attempts FROM ip_ban WHERE ipAddr = ? OR login = ?");
            $getTotalAttempts->execute(array($ip_address, $login));
            $totalAttempts = $getTotalAttempts->fetch();
            $totalAttempts = $totalAttempts['attempts'];
            if ($totalAttempts >= 4) {
                // Send Mayday SMS
                $to = "[email protected]";
                $subject = "Banned Account - $login";
                $mailheaders = 'From: [email protected]' . "\r\n";
                $mailheaders .= 'Reply-To: [email protected]' . "\r\n";
                $mailheaders .= 'MIME-Version: 1.0' . "\r\n";
                $mailheaders .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
                $msg = "<p>IP Address - " . $ip_address . ", Username - " . $login . "</p>";
                mail($to, $subject, $msg, $mailheaders);
                $setAccountBan = $db->query("UPDATE ip_ban SET isBanned = 1 WHERE ipAddr = '$ip_address'");
                $setAccountBan->execute();
                $errmsg_arr[] = 'Too Many Login Attempts';
                $errflag = true;
            }
        }

        if ($login == '') {
            $errmsg_arr[] = 'Login ID missing';
            $errflag = true;
        }
        if ($password == '') {
            $errmsg_arr[] = 'Password missing';
            $errflag = true;
        }

        // If there are input validations, redirect back to the login form
        if ($errflag) {
            $_SESSION['ERRMSG_ARR'] = $errmsg_arr;
            session_write_close();
            header('Location: http://somewhere.com/login.php');
            exit();
        }

        // Query database
        $loginSQL = $db->prepare("SELECT password FROM user_control WHERE username = ?");
        $loginSQL->execute(array($login));
        $loginResult = $loginSQL->fetch();

        // Compare passwords
        if ($loginResult['password'] == $encryptedPassword) {
            // Login Successful
            session_regenerate_id();

            // Collect details about user and assign session details
            $getMemDetails = $db->prepare("SELECT * FROM user_control WHERE username = ?");
            $getMemDetails->execute(array($login));
            $member = $getMemDetails->fetch();
            $_SESSION['SESS_MEMBER_ID'] = $member['user_id'];
            $_SESSION['SESS_USERNAME'] = $member['username'];
            $_SESSION['SESS_FIRST_NAME'] = $member['name_f'];
            $_SESSION['SESS_LAST_NAME'] = $member['name_l'];
            $_SESSION['SESS_STATUS'] = $member['status'];
            $_SESSION['SESS_LEVEL'] = $member['level'];
            // Get Last Login
            $_SESSION['SESS_LAST_LOGIN'] = $member['lastLogin'];

            // Set Last Login info
            $updateLog = $db->prepare("UPDATE user_control SET lastLogin = DATE_ADD(NOW(), INTERVAL 1 HOUR), ip_addr = ? WHERE user_id = ?");
            $updateLog->execute(array($ip_address, $member['user_id']));
            session_write_close();

            // If there are past failed log-in attempts, delete old entries
            if ($numAttempts > 0) {
                // Past failed log-ins from this IP address. Delete old entries
                $deleteIPBan = $db->prepare("DELETE FROM ip_ban WHERE ipAddr = ?");
                $deleteIPBan->execute(array($ip_address));
            }

            if ($member['level'] != "3" || $member['status'] == "Suspended") {
                header("location: http://somewhere.com");
            } else {
                header('Location: http://somewhere.com');
            }
            exit();
        } else {
            // Login failed. Add IP address and other details to ban table
            if ($numAttempts < 1) {
                // Add a new entry to IP Ban table
                $addBanEntry = $db->prepare("INSERT INTO ip_ban (ipAddr, login, attempts) VALUES (?,?,?)");
                $addBanEntry->execute(array($ip_address, $login, 1));
            } else {
                // Increment Attempts count
                $updateBanEntry = $db->prepare("UPDATE ip_ban SET ipAddr = ?, login = ?, attempts = attempts+1 WHERE ipAddr = ? OR login = ?");
                $updateBanEntry->execute(array($ip_address, $login, $ip_address, $login));
            }
            header('Location: http://somewhere.com/login.php');
            exit();
        }
        ?>

    Read the article

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx. One thing I noticed is that the logs show up immediately in my Azure Table Storage. I wonder if this is expected with custom trace listeners or whether it is because I am in a development environment. My diagnostics.wadcfg:

        <?xml version="1.0" encoding="utf-8"?>
        <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
          <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" />
          <Directories scheduledTransferPeriod="PT1M">
            <IISLogs container="wad-iis-logfiles" />
            <CrashDumps container="wad-crash-dumps" />
          </Directories>
          <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" />
        </DiagnosticMonitorConfiguration>

    I have changed my approach a bit. Now I am defining the listener in the web config of my web role. I notice that when I set autoflush to true in the web.config, everything works, but scheduledTransferPeriod is not honored because the flush method pushes straight to table storage. I would like to have scheduledTransferPeriod trigger the flush, or trigger the flush after a certain number of log entries, i.e. when the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the custom TraceListener where I can listen for the scheduledTransferPeriod?

        <system.diagnostics>
          <!-- http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx
               By default autoflush is false.
               By default useGlobalLock is true. While we try to be threadsafe, we keep this
               default for now. Later, if we would like to increase performance, we can remove it.
               See http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx -->
          <trace>
            <listeners>
              <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" />
              <remove name="Default" />
            </listeners>
          </trace>
        </system.diagnostics>

    I have modified the custom trace listener to the following:

        namespace Pos.Services.Implementation
        {
            class TableTraceListener : TraceListener
            {
                #region Fields

                // connection string for azure storage
                readonly string _connectionString;

                // Custom storage table for logs. TODO: put in config
                readonly string _diagnosticsTable;

                [ThreadStatic]
                static StringBuilder _messageBuffer;

                readonly object _initializationSection = new object();
                bool _isInitialized;
                CloudTableClient _tableStorage;

                readonly object _traceLogAccess = new object();
                readonly List<LogEntry> _traceLog = new List<LogEntry>();

                #endregion

                #region Constructors

                public TableTraceListener() : base("TableTraceListener")
                {
                    _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
                    _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
                }

                #endregion

                #region Methods

                /// <summary>
                /// Flushes the entries to the storage table
                /// </summary>
                public override void Flush()
                {
                    if (!_isInitialized)
                    {
                        lock (_initializationSection)
                        {
                            if (!_isInitialized)
                            {
                                Initialize();
                            }
                        }
                    }

                    var context = _tableStorage.GetTableServiceContext();
                    context.MergeOption = MergeOption.AppendOnly;
                    lock (_traceLogAccess)
                    {
                        _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
                        _traceLog.Clear();
                    }
                    if (context.Entities.Count > 0)
                    {
                        context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null);
                    }
                }

                /// <summary>
                /// Creates the storage table object. This class does not need to be locked because the caller is locked.
                /// </summary>
                private void Initialize()
                {
                    var account = CloudStorageAccount.Parse(_connectionString);
                    _tableStorage = account.CreateCloudTableClient();
                    _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
                    _isInitialized = true;
                }

                public override bool IsThreadSafe
                {
                    get { return true; }
                }

                #region Trace and Write Methods

                /// <summary>
                /// Writes the message to a string buffer
                /// </summary>
                /// <param name="message">the Message</param>
                public override void Write(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();
                    _messageBuffer.Append(message);
                }

                /// <summary>
                /// Writes the message with a line breaker to a string buffer
                /// </summary>
                /// <param name="message"></param>
                public override void WriteLine(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();
                    _messageBuffer.AppendLine(message);
                }

                /// <summary>
                /// Appends the trace information and message
                /// </summary>
                /// <param name="eventCache">the Event Cache</param>
                /// <param name="source">the Source</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="id">the Id</param>
                /// <param name="message">the Message</param>
                public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
                {
                    base.TraceEvent(eventCache, source, eventType, id, message);
                    AppendEntry(id, eventType, eventCache);
                }

                /// <summary>
                /// Adds the trace information to a collection of LogEntry objects
                /// </summary>
                /// <param name="id">the Id</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="eventCache">the EventCache</param>
                private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();

                    var message = _messageBuffer.ToString();
                    _messageBuffer.Length = 0;

                    if (message.EndsWith(Environment.NewLine))
                        message = message.Substring(0, message.Length - Environment.NewLine.Length);

                    if (message.Length == 0)
                        return;

                    var entry = new LogEntry()
                    {
                        PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
                        RowKey = string.Format("{0:D19}", eventCache.Timestamp),
                        EventTickCount = eventCache.Timestamp,
                        Level = (int)eventType,
                        EventId = id,
                        Pid = eventCache.ProcessId,
                        Tid = eventCache.ThreadId,
                        Message = message
                    };

                    lock (_traceLogAccess)
                        _traceLog.Add(entry);
                }

                #endregion
                #endregion
            }
        }
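    One observation worth adding: scheduledTransferPeriod only governs Azure Diagnostics' own buffered sources, so a listener that writes directly to table storage will never see it; the listener has to batch on its own clock. A minimal sketch of that idea (my suggestion, not part of the original listener; the 30-minute period just mirrors the PT30M value in diagnostics.wadcfg):

        // approximate scheduledTransferPeriod by flushing the buffer on a timer
        private System.Threading.Timer _flushTimer;

        public TableTraceListener() : base("TableTraceListener")
        {
            _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
            _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
            _flushTimer = new System.Threading.Timer(
                _ => Flush(),               // reuse the existing batching logic in Flush()
                null,
                TimeSpan.FromMinutes(30),   // first flush
                TimeSpan.FromMinutes(30));  // repeat period
        }

    A final Flush() wired to RoleEnvironment.Stopping (or the listener's Dispose) would then cover the server-shutdown case mentioned above.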

    Read the article

  • MS Ajax Libraries and Configured Assemblies

    - by smehaffie
    Use Case

    You have brand new IIS servers that have .NET 3.5 installed and you are migrating sites to the new servers. In the process of migrating sites you come across some sites that get an error about the version of the AJAX libraries referenced in the web.config. In the web.config all the entries reference 1.0.61025.0, but the older version of the AJAX libraries is not installed on the new servers; only the latest version, which comes with .NET 3.5, is installed. So what are the options to fix this issue?

    Solutions

    1) Install the older version of the AJAX libraries: Although this works, IMO it is never a great idea to install an older version of a library after a newer version has been installed. Plus, if all new applications use the latest version, is it worth the effort of installing the older version for a few legacy applications?

    2) Update the web.config files so all references use the latest version (3.5.0.0): This option is very time consuming and error prone. In addition, you will also have to update any pages where there is a register tag for the older libraries. This would require you to redeploy any applications that have this issue.

    3) Use the Configured Assembly capabilities of .NET (aka assembly bindings) to make any application that uses the older AJAX libraries use the new AJAX libraries. IMO, this is the easiest, quickest and least invasive way to fix the issue. Below are the steps to implement this fix.

    Solution #3

    Do the following steps on the IIS servers where the issue is occurring. The two assemblies that need assembly bindings created are System.Web.Extensions and System.Web.Extensions.Design.

    1) Go to Start -> All Programs -> Administrative Tools -> Microsoft .NET Framework 2.0 Configuration.
    2) Right click on "Configured Assemblies" to view the list of configured assemblies.
    3) Left click on the right pane to bring up the menu and choose "Add".
    4) Make sure "Choose an assembly from the assembly cache" is checked and click the "Choose Assembly" button.
    5) Choose System.Web.Extensions (it does not matter what version).
    6) Click the "Finish" button.
    7) On the Binding Policy tab:
       - Enter Requested Version = 1.0.61025.0
       - Enter New Version = 3.5.0.0
    8) Repeat steps 2-7 for the System.Web.Extensions.Design assembly.

    Note: If "Microsoft .NET Framework 2.0 Configuration" does not exist under Administrative Tools, use MMC to access it:

    1) Start -> Run -> enter MMC.
    2) File -> Add/Remove Snap-In, then click the "Add" button.
    3) Choose ".NET 2.0 Configuration", then click the "Add" button and then the "Close" button.
    4) On the "Add/Remove Snap-in" window click the "OK" button.
    5) Expand the tree on the right and you can start following the directions above for adding the configured assemblies.
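    If you would rather scope the redirect to a single application instead of machine-wide policy, the equivalent can be expressed directly in that application's web.config with a binding redirect (a sketch; 31bf3856ad364e35 is the public key token the ASP.NET AJAX assemblies are signed with):

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" />
              <bindingRedirect oldVersion="1.0.61025.0" newVersion="3.5.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>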

    Read the article

  • Handling WCF Service Paths in Silverlight 4 – Relative Path Support

    - by dwahlin
    If you’re building Silverlight applications that consume data, then you’re probably making calls to Web Services. We’ve been successfully using WCF along with Silverlight for several client Line of Business (LOB) applications and passing a lot of data back and forth. Due to the pain involved with updating the ServiceReferences.ClientConfig file generated by a Silverlight service proxy (see Tim Heuer’s post on that subject to see different ways to deal with it), we’ve been using our own technique to figure out the service URL. Going that route makes it a piece of cake to switch between development, staging and production environments. To start, we have a ServiceProxyBase class that handles identifying the URL to use based on the XAP file’s location (this assumes that the service is in the same Web project that serves up the XAP file). The GetServiceUrlBase() method handles this work:

        public class ServiceProxyBase
        {
            public ServiceProxyBase()
            {
                if (!IsDesignTime)
                {
                    ServiceUrlBase = GetServiceUrlBase();
                }
            }

            public string ServiceUrlBase { get; set; }

            public static bool IsDesignTime
            {
                get
                {
                    return (Application.Current == null) ||
                           (Application.Current.GetType() == typeof(Application));
                }
            }

            public static string GetServiceUrlBase()
            {
                if (!IsDesignTime)
                {
                    string url = Application.Current.Host.Source.OriginalString;
                    return url.Substring(0, url.IndexOf("/ClientBin", StringComparison.InvariantCultureIgnoreCase));
                }
                return null;
            }
        }

    Silverlight 4 now supports relative paths to services, which greatly simplifies things. We changed the code above to the following:

        public class ServiceProxyBase
        {
            private const string ServiceUrlPath = "../Services/JobPlanService.svc";

            public ServiceProxyBase()
            {
                if (!IsDesignTime)
                {
                    ServiceUrl = ServiceUrlPath;
                }
            }

            public string ServiceUrl { get; set; }

            public static bool IsDesignTime
            {
                get
                {
                    return (Application.Current == null) ||
                           (Application.Current.GetType() == typeof(Application));
                }
            }

            public static string GetServiceUrl()
            {
                if (!IsDesignTime)
                {
                    return ServiceUrlPath;
                }
                return null;
            }
        }

    Our ServiceProxy class derives from ServiceProxyBase and handles creating the ABC’s (Address, Binding, Contract) needed for a WCF service call. Looking through the code (mainly the constructor), you’ll notice that the service URI is created by supplying the base path to the XAP file along with the relative path defined in ServiceProxyBase:

        public class ServiceProxy : ServiceProxyBase, IServiceProxy
        {
            private const string CompletedEventargs = "CompletedEventArgs";
            private const string Completed = "Completed";
            private const string Async = "Async";
            private readonly CustomBinding _Binding;
            private readonly EndpointAddress _EndPointAddress;
            private readonly Uri _ServiceUri;
            private readonly Type _ProxyType = typeof(JobPlanServiceClient);

            public ServiceProxy()
            {
                _ServiceUri = new Uri(Application.Current.Host.Source, ServiceUrl);
                var elements = new BindingElementCollection
                {
                    new BinaryMessageEncodingBindingElement(),
                    new HttpTransportBindingElement
                    {
                        MaxBufferSize = 2147483647,
                        MaxReceivedMessageSize = 2147483647
                    }
                };
                // order of entries in collection is significant: dumb
                _Binding = new CustomBinding(elements);
                _EndPointAddress = new EndpointAddress(_ServiceUri);
            }

            #region IServiceProxy Members

            /// <summary>
            /// Used to call a WCF service operation.
            /// </summary>
            /// <typeparam name="T">The type of EventArgs that will be returned by the service operation.</typeparam>
            /// <param name="callback">The method to call once the WCF call returns (the callback).</param>
            /// <param name="parameters">Any parameters that the service operation expects.</param>
            public void CallService<T>(EventHandler<T> callback, params object[] parameters) where T : EventArgs
            {
                try
                {
                    var proxy = new JobPlanServiceClient(_Binding, _EndPointAddress);
                    string action = typeof(T).Name.Replace(CompletedEventargs, String.Empty);
                    _ProxyType.GetEvent(action + Completed).AddEventHandler(proxy, callback);
                    _ProxyType.InvokeMember(action + Async, BindingFlags.InvokeMethod, null, proxy, parameters);
                }
                catch (Exception exp)
                {
                    MessageBox.Show("Unable to use ServiceProxy.CallService to retrieve data: " + exp.Message);
                }
            }

            #endregion
        }

    The relative path support for calling services in Silverlight 4 definitely simplifies code and is yet another good reason to move from Silverlight 3 to Silverlight 4. For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
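    To make the reflection convention concrete, here is a hypothetical call site (the GetJobPlans operation and its event-args type are made-up names, not part of the post): because the generic argument is GetJobPlansCompletedEventArgs, CallService wires the callback to the proxy's GetJobPlansCompleted event and invokes GetJobPlansAsync with the supplied parameters:

        // hypothetical usage - assumes the service exposes a GetJobPlans operation
        var proxy = new ServiceProxy();
        proxy.CallService<GetJobPlansCompletedEventArgs>((s, e) => DisplayJobPlans(e), jobPlanId);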

    Read the article

  • NuGet package manager in Visual Studio 2012

    - by sreejukg
NuGet is a package manager that helps developers automate the process of installing and upgrading packages in Visual Studio projects. It is free and open source. You can see the project on CodePlex at the link below. http://nuget.codeplex.com/ Nowadays developers need to work with several packages or libraries from various sources; a typical example is jQuery. You will hardly find a website that does not use jQuery. When you include these packages by manually copying the files, it is a difficult task to update them as new versions get released. NuGet is a Visual Studio extension that manages such packages, and as happy news for developers, it ships with Visual Studio 2012 by default. So by using NuGet, you can add new packages to your project as well as update existing ones with the latest versions. In this article, I am going to demonstrate how you can include jQuery (or anything similar) in a .NET project using the NuGet package manager. I have Visual Studio 2012, and I created an empty ASP.NET web application. In the solution explorer, the project looks like the following. Now I need to add jQuery to this project, and for this I am going to use NuGet. From solution explorer, right-click the project and you will see the "Manage NuGet Packages" option. Click on it so that you get the NuGet Package Manager dialog. Since there is no package installed in my project, you will see a "no packages installed" message. From the left menu, select the Online option, and in the Search box (available in the top right corner) enter the name of the package you are looking for. In my case I just entered jQuery. Now the NuGet package manager will search online and bring back all the available packages that match my search criteria. You can select the right package and use the Install button just next to the package details. In the right pane, it will also show links to the project information and license terms, so you can see more details of the project you are looking for. Now I have selected to install jQuery. Once it is installed successfully, you will see a green icon next to it that tells you the package has been installed successfully in your project. Now if you go to the Installed packages link in the left menu of the package manager, you can see jQuery is installed, and you can uninstall it by just clicking the Uninstall button. Now close the package manager dialog and let us examine the project in solution explorer. You can see some new entries in your project: one is the Scripts folder, where jQuery got installed, and the other is a packages.config file. The packages.config file is an XML file that tells the NuGet package manager the id and the version of each package you install. Based on this file, the NuGet package manager identifies the installed packages and their corresponding versions. Installing packages using the NuGet package manager saves a lot of time for developers, and developers can get upgrades for the installed packages very easily.
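For illustration, the packages.config produced by the steps above would look something like the following; the version number here is hypothetical and will reflect whatever version NuGet actually installed. The same install can also be done from the Package Manager Console with the command Install-Package jQuery:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- id and version are what NuGet uses to track the installed package -->
  <package id="jQuery" version="1.8.2" targetFramework="net40" />
</packages>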

    Read the article

  • Tulsa SharePoint Interest Group - How SharePoint 2010 Business Connectivity Services could change your life

    - by dmccollough
    Bio: Corey Roth is a consultant at Stonebridge specializing in SharePoint solutions in the Oil & Gas industry. He has ten-plus years of experience delivering solutions in the energy, travel, advertising and consumer electronics verticals. Corey has always focused on rapid adoption of new Microsoft technologies including Visual Studio 2010, SharePoint 2010, .NET Framework 4.0, LINQ, and Silverlight. He also contributed greatly to the beta phases of Visual Studio 2005. For his contributions, he was awarded the Microsoft Award for Customer Excellence (ACE). Corey is a graduate of Oklahoma State University. Corey is a member of the .NET Mafia (www.dotnetmafia.com) where he blogs about the latest technology and SharePoint. Abstract: How SharePoint 2010 Business Connectivity Services could change your life - The New BDC How many hours have you wasted building simple ASP.NET applications to do nothing more than simple CRUD operations against a database? Many tools have made this easier, but now it's so easy, you'll be up and running in minutes. This session will show you how easy it is to get started integrating external data from your line of business systems in SharePoint 2010. You will learn how to register an external content type using SharePoint Designer based upon a database table or web service and then build an external list. With external lists, you will see how you can perform CRUD operations on your line of business data directly from SharePoint without ever having to do manual configuration in XML files. Finally, we will walk through how to create custom edit forms for your list using InfoPath 2010. Agenda: 6pm - 6:30 Pizza and Mingle - Sponsored by TekSystems 6:30 - 6:45 Announcements 6:45 - 7:45 Presentation! 7:45 - 8:00 Drawings and Door Prizes Location: TCC (Tulsa Community College) Northeast Campus 3727 East Apache Tulsa, OK 74115 918-594-8000 Campus Map | Live | Yahoo | Google | MapQuest Door Prizes: We will be giving away one of each of these: XBox 360 - Halo 3 ODST Telerik Premium Collection ($1300.00 value) ReSharper ($199.00 value) SQLSets ($149.00 value) 64 bit Windows 7 Introducing Windows 7 for Developers Developing Service-Oriented AJAX Applications on the Microsoft Platform Sponsors: Thanks to our sponsors: TekSystems - Thanks for purchasing the pizza for our meetings. ISOCentric - Thanks for providing hosting for the group's web site. Tulsa Community College - Thanks for providing us a place to have our meetings. NEVRON - Thanks for providing us prizes to give away. INETA.org - For allowing us to be a Charter Member and providing awesome speakers! PERPETUUM Software - Thanks for providing us prizes to give away. Telerik - Thanks for providing us prizes to give away. GrapeCity - Thanks for providing us prizes to give away. SQLSets - Thanks for providing us prizes to give away. K2 - Thanks for providing us prizes to give away. Microsoft - For providing us with a lot of support and product giveaways! O'Reilly books - For providing us with books and discounts. Wrox books - For providing us with books and discounts. Have any special requests? Let us know at this link: http://tinyurl.com/lg5o38. RSVP for this month's meeting by responding to this thread: http://tinyurl.com/yafkzel (you must be logged in to the site). Be SURE to RSVP no later than noon on April 12th and you will get an extra entry for the prize drawings! So do it now, before you forget and miss out! Show up for the first time or bring a new buddy and you both get TWO extra entries!

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()

        Dim fireAgain As Boolean

        ' Get the connection string, we need to know the name of the connection
        Dim connectionName As String = "My OLE-DB Connection"
        Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

        ' Format the message and log it via an information event
        Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
            connectionName, connectionString)
        Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

        Dts.TaskResult = Dts.Results.Success

    End Sub

End Class

Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()

        Dim fireAgain As Boolean

        ' Loop through all connections in the package
        For Each connection As ConnectionManager In Dts.Connections

            ' Get the connection string and log it via an information event
            Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                connection.Name, connection.ConnectionString)
            Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

        Next

        Dts.TaskResult = Dts.Results.Success

    End Sub

End Class

Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and also allows you to control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only, and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft. Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference: "Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires a connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed.
The log entry for the prepare phase includes the SQL statement that the task uses." It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being generated? You need to turn it on; by default no custom log events are captured, but I'll refer you to a walkthrough by Jamie on setting up the logging for ExecuteSQLExecutingQuery.
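As a side note, SSIS 2008 and later also allow Script Tasks to be written in C#. A minimal sketch of the same log-all-connections loop in C# might look like the following; this assumes the usual template-generated ScriptMain class with Microsoft.SqlServer.Dts.Runtime in scope, where ScriptResults is the enum the Script Task project template generates for you:

public void Main()
{
    bool fireAgain = false;

    // Loop through all connection managers in the package
    foreach (ConnectionManager connection in Dts.Connections)
    {
        // Get the connection string and log it via an information event
        string message = String.Format(
            "Connection \"{0}\" has a connection string of \"{1}\".",
            connection.Name, connection.ConnectionString);
        Dts.Events.FireInformation(0, "Information", message, null, 0, ref fireAgain);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}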

    Read the article

  • Best WordPress Shopping Cart & Ecommerce Plugins

    - by Edward
    A versatile WordPress shopping cart plugin can help you create a feature-rich online store on your WordPress-powered website or blog. Some are so advanced that you can get your store up and running in minutes. Some plugins allow you to take ecommerce to the next level with their high-end customization tools. Here is a list of the best WP shopping cart plugins available: Cart66 One of the best WordPress plugins, with lots of features, great quality and ease of use. It accepts a few more payment gateways such as PayPal Website Payments Standard, PayPal Website Payments Professional, PayPal Express Checkout, eProcessing Network etc. It has flexible design options, recurring payments for subscriptions, memberships, and payment plans, and Easy PCI Compliance – Safe and Secure. It is fast and efficient, one can sell digital and physical products, and support is good. Price: Standard $49 & Professional $99 Details Download StorePress StorePress is a fully coded WordPress theme. It comes with scripts that can change a WordPress blog into a veritable e-commerce virtual store. With this great premium WordPress theme, one can start affiliate stores, or promote affiliate products. Price: Single $59.99 & Developer License $119.99 Details Download WordPress eStore Plugin This shopping cart plugin comes with easy checkout, ease of design and use, automatic instant digital product delivery, NextGen Gallery integration, autoresponder integration etc. It is a lightweight shopping cart and allows a multi-site license. This plugin offers an amazingly comprehensive toolkit that will ensure your online shop is almost just plug-and-play. Price: $49.99 Details Download ShoppersPress ShoppersPress is a premium cart for WordPress that comes with 20+ to choose from and 20+ built-in payment gateways. It features one-click setups, personalized user accounts, easy management tools, detailed sales tracking, promotional options, a variety of product import tools, and many more features. Price: $79 Details Download WordPress Shopping Cart plugin The WordPress Shopping Cart plugin by Tribulant quickly and seamlessly integrates an online shop with a fully functional shopping cart interface into any WordPress website. It has an easy-to-use interface that lets you set up multiple products and categorize and organize them into multiple product categories. It also has many more attractive features. Price: $49.99 Details Download WP e-Commerce WP e-Commerce is a free, full-featured shopping cart plugin for WordPress that boasts easy checkout. It offers a wide range of features including SSL compatibility, customization and merchandising, integrated payment processing solutions (including manual payment, Google Checkout and PayPal Payments), and email marketing. It is WordPress and social networking integrated, and it is customizable through PHP template tags, WordPress shortcodes and widgets. Details Download YAK for WordPress YAK is an open source shopping cart plugin for WordPress. It associates products with weblog entries (in other words, posts), so the post ID also becomes the product code. It supports both pages and posts as products, and handles different types of products through categories. YAK supports downloadable products, so any e-books, plugins, or zip files you're marketing can be easily purchased and downloaded. Details Download MarketPress It is another shopping cart full of many features.
It offers features such as assigning categories and tags to products to make them easy to find, stock tracking with alerts, order management/alerts, fully customizable email messages, full support for most major currencies, fully customizable store URLs/slugs, and checkout without customers needing to be site users. It is expensive, but a good option for those who can afford it. Price: $17.42/month Details Download Shopp It is an excellent shopping cart plugin for WordPress. This plugin is extremely easy to install and use, and it has a clean interface. The customer support is good, and you can easily customize the look of the cart using its amazing features. Price: $55 Details Download Related posts: 8 PHP Shopping Cart Software for Reliable Ecommerce Solution Shopping Cart SEO 8 Free Open Source Shopping Carts

    Read the article

  • Change Desktop Resolution With a Keyboard Shortcut

    - by Matthew Guay
    Do you find yourself changing your monitor resolution several times a day? If so, you might like this handy way to set a keyboard shortcut for your most-used resolutions. Most users rarely have to change their screen resolution, as LCD monitors usually only look best at their native resolution. But netbooks present a unique situation, as their native resolution is usually only 1024×600. Some newer netbooks offer higher resolutions which may not look as crisp as the native resolution but can be handy for using a program that expects a higher resolution. This is the perfect situation for a keyboard shortcut to help you change the resolution without having to hassle with dialogs and menus each time, and HRC – HotKey Resolution Changer makes it easy to do. Create Keyboard Shortcuts Download the HRC – HotKey Resolution Changer (link below), unzip, and then run HRC.exe from the folder. This will start a tray icon; it will not automatically open the HRC window. You don't have to install HRC. Double-click the tray icon to open it. Note: Windows 7 automatically hides new tray icons, so if you can't see it, click the arrow to see the hidden tray icons. By default, HRC will show two entries with your default resolution, color depth, and refresh rate. Add a keyboard shortcut by clicking the Change button over the resolution. Press the keyboard shortcut you want to use to switch to that resolution; we entered Ctrl+Alt+1 for our default resolution. Make sure not to use a keyboard shortcut you use in another application, as this will override it. Click Set when you've entered the hotkey(s) you want. Now, on the second entry, select the resolution you want as your alternate resolution. The drop-down list will only show your monitor's supported resolutions, so you don't have to worry about choosing an incorrect resolution. You can also set a different color depth or refresh rate for this resolution. Now add a keyboard shortcut for this resolution as well. You can set keyboard shortcuts for up to 9 different resolutions with HRC. Click the Select number of HotKeys button on the left, and choose the number of resolutions you want to set. Here we have unique keyboard shortcuts for our three most-used resolutions on our netbook. HRC must be kept running to use the keyboard shortcuts, so click the Minimize to tray icon, which is the second icon from the right. This will keep it running in the tray. If you want to be able to change your resolution anytime, you'll want HRC to start automatically with Windows. Create a shortcut to HRC, and paste it into your Windows startup folder. You can easily open this folder by entering the following in the Run command or in the address bar in Explorer: %appdata%\Microsoft\Windows\Start Menu\Programs\Startup Conclusion HRC – HotKey Resolution Changer gives you a great way to quickly change your screen resolution with a keyboard shortcut. Whether or not you love keyboard shortcuts, this is still a much easier way to switch between your most commonly used resolutions.
Download HRC – HotKey Resolution Changer Similar Articles: Create a Keyboard Shortcut to Access Hidden Desktop Icons and Files Get Mac's Hide Others (cmd+opt+H) Keyboard Shortcut for Windows Hide Desktop Icon Text on Windows 7 or Vista Show Keyboard Shortcut Access Keys in Windows Vista Keyboard Ninja: 21 Keyboard Shortcut Articles
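The article covers the tool itself; for the curious, switching resolutions programmatically on Windows comes down to the Win32 ChangeDisplaySettings API, which is presumably what HRC uses under the hood. The following C# sketch is our own illustration, not HRC's actual code; the DEVMODE declaration is trimmed to the display-related fields, and the 1024x600 example matches the netbook scenario above:

using System;
using System.Runtime.InteropServices;

class ResolutionSwitcher
{
    // Trimmed DEVMODE declaration; dmSize tells the API how large our struct is
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
    struct DEVMODE
    {
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
        public string dmDeviceName;
        public short dmSpecVersion, dmDriverVersion, dmSize, dmDriverExtra;
        public int dmFields;
        public int dmPositionX, dmPositionY, dmDisplayOrientation, dmDisplayFixedOutput;
        public short dmColor, dmDuplex, dmYResolution, dmTTOption, dmCollate;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
        public string dmFormName;
        public short dmLogPixels;
        public int dmBitsPerPel;
        public int dmPelsWidth;    // horizontal resolution
        public int dmPelsHeight;   // vertical resolution
        public int dmDisplayFlags;
        public int dmDisplayFrequency;
    }

    const int ENUM_CURRENT_SETTINGS = -1;

    [DllImport("user32.dll")]
    static extern bool EnumDisplaySettings(string lpszDeviceName, int iModeNum, ref DEVMODE lpDevMode);

    [DllImport("user32.dll")]
    static extern int ChangeDisplaySettings(ref DEVMODE lpDevMode, int dwFlags);

    static void SetResolution(int width, int height)
    {
        var dm = new DEVMODE();
        dm.dmSize = (short)Marshal.SizeOf(typeof(DEVMODE));

        // Start from the current mode so color depth and refresh rate are preserved
        EnumDisplaySettings(null, ENUM_CURRENT_SETTINGS, ref dm);
        dm.dmPelsWidth = width;
        dm.dmPelsHeight = height;

        // 0 = apply the change dynamically; returns 0 (DISP_CHANGE_SUCCESSFUL) on success
        ChangeDisplaySettings(ref dm, 0);
    }

    static void Main()
    {
        SetResolution(1024, 600); // switch to the netbook's native resolution
    }
}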

    Read the article

  • Pie Charts Just Don't Work When Comparing Data - Number 10 of Top 10 Reasons to Never Ever Use a Pie Chart

    - by Tony Wolfram
    When comparing data, which is what a pie chart is for, people have a hard time judging the angles and areas of the multiple pie slices in order to calculate how much bigger one slice is than the others. Pie Charts Don't Work A slice of pie is good for serving up a portion of dessert. It's not good for making a judgement about how big the slice is, what percentage of 100 it is, or how it compares to other slices. People have trouble comparing angles and areas to each other. Controlled studies show that people will overestimate the percentage that a pie slice area represents. This is because we have trouble calculating the area based on the space between the two angles that define the slice. This picture shows how a pie chart is useless in determining the largest value when you have to compare pie slices. You can't compare angles and slice areas to each other. Human perception and cognition are poor at judging angles and areas and trying to make a mental comparison. Pie charts overload working memory, forcing the person to make complicated calculations while at the same time making a decision based on those comparisons. What's the point of showing a pie chart when you want to compare data, except to say, "well, the slices are almost the same, but I'm not really sure which one is bigger, or by how much, or what order they are from largest to smallest. But the colors sure are pretty. Plus, I like round things. Oh, was I supposed to make some important business decision? Sorry." Bad Choices and Bad Decisions Interaction designers, graphic artists, report builders, software developers, and executives have all made the decision to use pie charts in their reports, software applications, and dashboards. It was a bad decision. It was a poor choice. There are always better options and choices, yet the designer still made the decision to use a pie chart. I'll explore why people make such poor choices in my upcoming blog entries. (Hint: It has more to do with emotions than with analytical thinking.) I've outlined my opinions and arguments about the evils of using pie charts in "Countdown of Top 10 Reasons to Never Ever Use a Pie Chart." Each of my next 10 blog entries will support these arguments with illustrations, examples, and references to studies. But my goal is not to continuously and endlessly rage against the evils of using pie charts. This blog is not about pie charts. This blog is about understanding why designers choose to use a pie chart. Why, when given better alternatives, and acknowledging the shortcomings of pie charts, do designers over and over again still freely choose to place a pie chart in a report? As an extra treat and parting shot, check out the nice pie chart that Wikipedia uses to illustrate the United States population by state. Remember, somebody chose to use this pie chart, with all its glorious colors, and posted it on Wikipedia for all the world to see. My next blog will give you a better alternative for displaying comparable data: the sorted bar chart.

    Read the article

  • Calling Web Services in classic ASP

    - by cabhilash
    The other day my colleague asked me to provide her a solution to call a Web service from classic ASP. (Yes, classic ASP. People are still using this :D ) We could also call the web service using the SOAP Toolkit, but invoking the service using the XMLHTTP object is easier and faster. To create the service I used a normal .NET 2.0 Web Service with a [WebMethod]:

public class WebService1 : System.Web.Services.WebService
{
    [WebMethod]
    public string HelloWorld(string name)
    {
        return name + " Pay my dues :) "; // a reminder to pay my consultation fee :D
    }
}

In Web.config, add the following entry under system.web:

<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>

Alternatively, you can enable these protocols for all Web services on the computer by editing the <protocols> section in Machine.config. The following example enables HTTP GET, HTTP POST, and also SOAP and HTTP POST from localhost:

<protocols>
  <add name="HttpSoap"/>
  <add name="HttpPost"/>
  <add name="HttpGet"/>
  <add name="HttpPostLocalhost"/>
  <!-- Documentation enables the documentation/test pages -->
  <add name="Documentation"/>
</protocols>

By adding these entries I am enabling HTTPGET and HTTPPOST (after .NET 1.1, HTTPGET and HTTPPOST are disabled by default because of security concerns). The .NET Framework 1.1 defines a new protocol that is named HttpPostLocalhost. By default, this new protocol is enabled. This protocol permits invoking Web services that use HTTP POST requests from applications on the same computer, provided the POST URL uses http://localhost, not http://hostname. This permits Web service developers to use the HTML-based test form to invoke the Web service from the same computer where the Web service resides.

Classic ASP code to call the Web service:

<%
Option Explicit

Dim objRequest, objXMLDoc, objXmlNode
Dim strRet, strError, strNome
Dim strName
strName = "deepa"

Set objRequest = Server.CreateObject("MSXML2.XMLHTTP")
With objRequest
    .open "GET", "http://localhost:3106/WebService1.asmx/HelloWorld?name=" & strName, False
    .setRequestHeader "Content-Type", "text/xml"
    .setRequestHeader "SOAPAction", "http://localhost:3106/WebService1.asmx/HelloWorld"
    .send
End With

Set objXMLDoc = Server.CreateObject("MSXML2.DOMDocument")
objXmlDoc.async = False

Response.ContentType = "text/xml"
Response.Write(objRequest.ResponseText)
%>

After creating the MSXML XMLHTTP object, I open a connection to the web service using the HTTPGET protocol (the .open call), set the headers for the service, and then load the response into an XML document and write out the ResponseText. Notice in the .open call that I am passing the parameter strName to the Web service; you can pass multiple parameters just like any other QueryString parameters. In a similar fashion you can invoke the Web service using HTTPPOST; you only have to ensure that the request contains all the parameters the webmethod requires. Happy coding !!!!!!!
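Since the article stops short of showing the POST variant, here is a minimal sketch of what it might look like; the URL and parameter name match the GET example above, and with the HttpPost protocol the parameters go in the body as standard form-encoded data:

<%
Dim objRequest
Set objRequest = Server.CreateObject("MSXML2.XMLHTTP")
With objRequest
    ' POST to the method URL itself; parameters travel in the body as form data
    .open "POST", "http://localhost:3106/WebService1.asmx/HelloWorld", False
    .setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
    .send "name=" & Server.URLEncode("deepa")
End With
Response.ContentType = "text/xml"
Response.Write(objRequest.ResponseText)
%>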

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features concerns the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders. Users could assign specific security rights on each and every document and folder within a project. And it was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1). When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently. And there is a new component available which adds an additional dimension for defining security on the object: Roles. So now instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This allows for a much more flexible and manageable security model than trying to manage with just user and group access as people come and go in the organization. The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

UseEntitySecurity=true
SpecialAuthGroups=<comma separated list of Security Groups to honor ACLs>

The SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance. Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For Users and Groups, these values are automatically picked up from the corresponding database tables. In the case of Roles, this is an explicitly defined list of choices that are made available. These values must match the role that is being defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager. On the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from in the Roles Access List. As for how they are stored in the metadata fields, each entry starts with its identifier: the ampersand (&) symbol for users, the "at" (@) symbol for groups, and a colon (:) for roles. Following that is the entity name.
And at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma. So if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
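To make the storage format concrete, hypothetical field values following the rules above would look like this (the user and group names are made up for illustration; 'guest' and 'authenticated' are the default roles mentioned earlier):

xClbraUserList: &sysadmin(RWDA),&jdoe(RW)
xClbraAliasList: @HR_Team(RW)
xClbraRoleList: :guest(R),:authenticated(RWDA)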

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I've been struggling with an interesting problem with WCF REST since last night and I haven't been able to track this down. I have a WCF REST Service set up, and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel:

Server Error in '/AspNetClient' Application.
Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:
[TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.]
   System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0
   System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95
   System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54
   System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65
   System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69
   System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38
   System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13
   System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19
   System.Web.HttpApplication.GetFactory(String type) +81
   System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223
   System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184

Version Information: Microsoft .NET Framework Version: 4.0.30319; ASP.NET Version: 4.0.30319.1

What's really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:

<serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />

Arrrgh!!!!! After some experimenting and some help from Glenn Block and his teammates I was able to track down the problem in ApplicationHost.config. Specifically, the problem was that there were multiple *.svc mappings in the ApplicationHost.config file, and the older 2.0 runtime-specific versions weren't marked for the proper runtime. Because these handlers show up at the top of the list, they execute first, resulting in assembly load errors for the wrong version of the assembly. To fix this problem I ended up making a couple of changes in applicationhost.config.
On the machine-level root's Handler Mappings I had an entry that looked like this:

<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" />

and it needs to be changed to this:

<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />

Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The point is that the runtime version needs to be set explicitly so that the various *.svc handlers don't fire purely in the order defined, which in the case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ServiceModel being loaded. What made this really hard to track down is that even when looking in the debugger while launching the Web app, the AppDomain assembly loads showed System.ServiceModel v4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point, and that's when things break. So how did this break? According to the Microsoft folks it's some older tools that got installed that change the default service handlers. There's a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config. © Rick Strahl, West Wind Technologies, 2005-2011 Posted in AJAX ASP.NET WCF

    Read the article

  • Oracle Social Network Developer Challenge: TEAM Informatics

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Here comes another Oracle Social Network Developer Challenge entry, this one courtesy of TEAM Informatics (@teaminformatics). As their name suggests, their entry was a true team effort, featuring the work of Jon Chartrand, Deepthi Sanikommu, Dmitry Shtulman, Raghavendra Joshi, and Daniel Stitely, with Wayne Boerger doing the presentation honors. Speaking of the presentation, Wayne's laptop wouldn't project onto the plasma we had in the OTN Lounge, but luckily, Noel (@noelportugal) had his iPad and VGA dongle in his backpack of goodies, so they were able to improvise by using the iPad camera to capture Wayne's demo and project the video to the plasma. Code will find a way. Anyway, TEAM built Do Over, an integration with Atlassian's JIRA, coincidentally something I've chatted with Rich (@rmanalan) about in the past. The basic idea is simple: integrate JIRA issues with Oracle Social Network to expand and centralize the conversation around issue resolution. In Dmitry's words: We were able to put together a team on fairly short notice and, after batting a few ideas around, decided to pursue an integration with JIRA, an issue and project tracking tool used in-house at TEAM. After getting to know WebCenter Social, we saw immediate benefits that a JIRA integration could bring, primarily due to the fact that JIRA only allows assignment of an issue to one person at a time. Integrating Social would allow collaboration and issue resolution to happen right from the JIRA Issue interface. TEAM tackled a very common pain point among developers, i.e. including everyone who needs to be involved in issue resolution in a single thread. If you've ever fixed bugs or participated in that process, you'll know that not everyone has access to the issue resolution system, which makes consolidating discussion time-consuming and fragmented. Why? Because we typically use email as the tool for collaboration. Oracle Social Network allows all parties involved to work in a single, private and secure conversation, and through its RESTful Public API, information from external systems like JIRA can be brought in for context. TEAM only had time to address half the solution, but given more time, I'm sure they would have made the integration bidirectional, allowing relevant commentary to be pushed back to JIRA, closing the loop. Here are some screenshots of their integration. When Oracle Social Network is released, TEAM will have something they use internally to work on issues, and maybe they'll even productize their work and add it to the Atlassian Marketplace so that other JIRA users can benefit from the combination of Oracle Social Network and JIRA. Thanks to everyone at TEAM for participating in our challenge. We hope they had a good experience. Look for the details of the other entries this week. Be sure to check out a full recap from Dmitry over on the TEAM blog.

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >