Search Results

Search found 11960 results on 479 pages for 'virtual domains'.


  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [you can still register if you are free on the afternoons of 9th to 11th – UK time] We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training. Background We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally the course is based around a subset of the channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday then you are still good to go. But if you can do some pre-work you will likely get even more out of the three days. Step 1: Check out the topics and resources available on-demand The course is based around a subset of the channel 9 training to allow you to easily dig deeper or look again at specific areas. Take a lap around the SharePoint 2010 Training Course on Channel 9 Download the SharePoint Developer Training Kit Step 2: Use a pre-configured Virtual Machine which you can download (best start today – it is large!) Consider using the VM we created If you don't have access to SharePoint 2010. You will need a 64bit host OS and bare minimum of 4GB of RAM. 8GB recommended. Virtual PC can not be used with this VM – Virtual PC only supports 32bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the Video on how to use this VM Download the VM Remember you only need to download the “parts” for the 2010-7a VM. There are 3 subtly different ways of using this VM: Easiest is to follow the advice of the video and get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM Alternatively you can take the VHD and create a “Boot to VHD” if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with “Boot to VHD” (This post I did will help you get started) Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: This tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly and if you go with Virtual Box use an IDE controller not SATA. SATA will blue screen. Note in the screenshot below I also converted the vhd to a vmdk. I used the FREE Starwind Converter to do this whilst I was fighting blue screens – not sure its necessary as VirtualBox does now work with VHDs. or Step 3 – Install SharePoint 2010 on a 64bit Windows 7 or Vista Host I haven’t tried this but it is now supported. Check out MSDN. Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Else we can sort out on the Wed. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc. Related Links: Check you are fully plugged into the work of my team – have you done these simple steps including joining our new LinkedIn group?

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way for defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform: Modular plugability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease. Robust UI. The NetBeans Platform intuitive UI makes it easy to access and manipulate multiple aspects of a DFC. Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and find errors. Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within. Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files. Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.   A screenshot: Bray 2.0 provides a lot of key features in developing valid, robust DFC files:  XML validation. A DFC can be validated against the DFC schema specification. DFC Check List. An interactive, minimal guide for creating a complete DFC. Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules and provides the ability to quickly find and fix errors. Change Log. Bray audits every change to a DFC and places them in a change log for users to peruse. Comments. Users can optionally add comments for other users to see. Digital signatures. DFC files can be digitally signed. A signature history and signature validation is provided in Bray. Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.  ...and more to come! New features for Bray are constantly in development including use of the NetBeans Visual Library, language support, and more. More screenshots:

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    guest article: Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default...but WebLogic may not have been 'Extended' to allow you to call it up (we are hopeful this 'Extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of this document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help: Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active). Ensure you have logged into the computer as the installing owner of EPM. Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load. On the machine where the WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix). Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button. Select the domain being used by EPM System. - Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. - Click 'Next'. Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it. - Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. - The system will grind for a few minutes while it configures (deploys?) EM. - Start the AdminServer. Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines you may start and stop services via the following command-line commands, analogous to those run on Linux/Unix (these space out the timing of these events more carefully): Ensure EPM is up: \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat Ensure WebLogic is up: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd Shut down WebLogic: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd Shut down EPM: \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat Now you should be able to troubleshoot more successfully with the EM tool:

    Read the article

  • How to resize / enlarge / grow a non-LVM ext4 partition

    - by Mischa
    I have already searched the forums, but couldnt find a good suitable answer: I have an Ubuntu Server 10.04 as KVM Host and a guest system, that also runs 10.04. The host system uses LVM and there are three logical volumes, which are provided to the guest as virtual block devices - one for /, one for /home and one for swap. The guest had been partitioned without LVM. I have already enlarged the logical volume in the host system - the guest successfully sees the bigger virtual disk. However, this virtual disk contains one "good old" partition, which still has the old small size. The output of fdisk -l is me@produktion:/$ LC_ALL=en_US sudo fdisk -l Disk /dev/vda: 32.2 GB, 32212254720 bytes 255 heads, 63 sectors/track, 3916 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000c8ce7 Device Boot Start End Blocks Id System /dev/vda1 * 1 3917 31455232 83 Linux Disk /dev/vdb: 2147 MB, 2147483648 bytes 244 heads, 47 sectors/track, 365 cylinders Units = cylinders of 11468 * 512 = 5871616 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f2bf7 Device Boot Start End Blocks Id System /dev/vdb1 1 366 2095104 82 Linux swap / Solaris Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(0, 43, 28) Partition 1 has different physical/logical endings: phys=(260, 243, 47) logical=(365, 136, 44) Disk /dev/vdc: 225.5 GB, 225485783040 bytes 255 heads, 63 sectors/track, 27413 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00027f25 Device Boot Start End Blocks Id System /dev/vdc1 1 9138 73398272 83 Linux The output of parted print all is Model: Virtio Block Device (virtblk) Disk /dev/vda: 32.2GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 32.2GB 32.2GB primary ext4 boot Model: Virtio Block Device (virtblk) Disk /dev/vdb: 2147MB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 2146MB 2145MB primary linux-swap(v1) Model: Virtio Block Device (virtblk) Disk /dev/vdc: 225GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 75.2GB 75.2GB primary ext4 What I want to achieve is to simply grow or resize the partition /dev/vdc1 so that it uses the whole space provided by the virtual block device /dev/vdc. The problem is, that when I try to do that with parted, it complains: (parted) select /dev/vdc Using /dev/vdc (parted) print Model: Virtio Block Device (virtblk) Disk /dev/vdc: 225GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 75.2GB 75.2GB primary ext4 (parted) resize 1 WARNING: you are attempting to use parted to operate on (resize) a file system. parted's file system manipulation code is not as robust as what you'll find in dedicated, file-system-specific packages like e2fsprogs. We recommend you use parted only to manipulate partition tables, whenever possible. Support for performing most operations on most types of file systems will be removed in an upcoming release. Start? [1049kB]? End? [75.2GB]? 
224GB Error: File system has an incompatible feature enabled. Compatible features are has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or debugfs to remove features. So what can I do? This is a headless production system. What is a safe way to grow this partition? I CAN unmount it, though - so this is not the problem.

    Read the article

  • Rolling Along: PASS Board Year 2, Q2

    - by Denise McInerney
    Eighteen months into my time as a PASS Director I’m especially proud of what the Virtual Chapters have accomplished and want to share that progress with you. I'm also pleased that the organization has invested more resources to support the VCs. In this quarter I got to attend two conferences and meet more members of the SQL community. Virtual Chapters In the first six months of 2013 VCs have hosted more than 50 webinars, offering free technical education to over 6200 attendees. This is a great benefit to PASS members; thanks to the VC leaders, volunteers and speakers who contribute their time to produce these events. The Performance VC held their “Summer Performance Palooza”, an event featuring eight back-to-back sessions. Links to the session recordings can be found on the VCs web site. The new webinar platform, GoToWebinar, has been rolled out to all the VCs. This is a more stable, scalable platform and represents an important investment into the future of the VCs. A few new VCs are in the planning stages, including one focused on Security and one for Russian speakers. Visit the Virtual Chapter home page to sign up for the chapters that interest you. Each Virtual Chapter is offering a discount code for PASS Summit 2013. Be sure to ask your VC leader for the code to save $200 on Summit registration. 24 Hours of PASS The next 24HOP will be on July 31. This Summit Preview edition will feature 24 consecutive webcasts presented by experts who will be speaking at Summit in October. Registration for this free event is open now. And we will be using the GoToWebinar platform for 24HOP also. Business Analytics Conference April marked the first PASS Business Analytics Conference in Chicago. This introduced PASS to another segment of data professionals: the analysts and data scientists who work with the world’s growing collection of data. Overall the inaugural event was a success and gave us a glimpse into this increasingly important space. After Chicago the Board had several serious discussions about the lessons learned from this seven and what we should do next. We agreed to apply those lessons and continue to invest in this event; there will be a PASS Business Analytics Conference in 2014. I’m very pleased the next event will be in San Jose, CA, the heart of Silicon Valley, a place where a great deal of investment and innovation in data analytics is taking place. Global SQL Community Over the last couple of years PASS has been taking steps to become more relevant to SQL communities in different parts of the world. In May I had the opportunity to attend SQL Bits XI in Nottingham, England. It was enlightening to meet and talk with SQL professionals from around the U.K. as well as many other European countries. The many SQL Bits volunteers put on a great event and were gracious hosts. Budgets The Board passed the FY14 budget at the end of June. The  budget process can be challenging and requires the Board to make some difficult choices about where to allocate resources. Overall I’m satisfied with the decisions we made and think we are investing in the right activities and programs. Next Up The Board is meeting July 18-19 in Kansas City. We will be holding the Executive Committee election for the Exec Co that will take office in 2014. We will also be discussing plans for the next BA conference as well as the next steps for our Global Growth initiative. Applications for the upcoming Board of Directors election open on July 24. 
If you are considering running for the Board you can visit the PASS elections site to learn more about the election process. And I encourage anyone considering running to reach out to current and past Board members to learn about what the role entails. Plans for the next PASS Summit are in full swing. We are working on some fun new ideas to introduce attendees to the many ways to become involved in the SQL community.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 7-13, 2012

    - by Bob Rhubart
    The Top 10 items shared via the OTN ArchBeat Facebook page for the week of October 7-13, 2012. OOW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting. Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca. Adaptive ADF/WebCenter template for the iPad | Maiko Rocha Oracle Fusion Middleware A-Team member Maiko Rocha responds to a a customer request for information about how to create an adaptive iPad template for their WebCenter Portal application, "a specific template to streamline their workflow on the iPad." Following the Thread in OSB | Antony Reynolds Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post. WebCenter Sites Gadget Development Concepts Quickstart | John Brunswick What are Gadgets? "At their most basic level they can be thought of as lightweight portlets that run largely on the client side of an architecture," says John Brunswick. "Gadgets provide a cross-platform container to run reusable UI modules that generally expose dynamic information to an end user, allowing for some level of end user customization." Oracle Fusion Middleware Security: OAM and OIM 11g Academies Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call Academies. "These indexes contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations." Fusion Applications Technical Tips | Naveen Nahata "Setting memory parameters for Admin and Managed servers of various domains in Fusion Applications can be, let us say, a little daunting," says Oracle Fusion Middleware A-Team member Naveen Nahata. "While all this may look complicated and intimidating, it is actually relatively simple once you understand how it all works." Updated Agenda for OTN Architect Day Los Angeles (Oct 25) In less than two weeks Oracle Architect Day rolls into Los Angeles, with a full slate of sessions devoted to cloud computing, engineered systems, and SOA. Follow the link for the updated event agenda. ORCLville: OOW 2012 - A Not So Brief Recap Oracle ACE Director Floyd Teter, an Applications & Apps Technology specialists, shares his personal, frank, and and extensive recap or Oracle OpenWorld 2012. SOA Suite create partition in Enterprise Manager | Peter Paul van de Beek "In Oracle SOA Suite 10g, or more specific BPEL 10g, one could group functionality in domains," says Peter Paul van de Beek. "This feature has been away in the early versions of SOA Suite 11g. They have returned in more recent version and can be used for all SCA composites (instead of BPEL only). Nowadays these 10g domains are called partitions." 
Thought for the Day "I strive for an architecture from which nothing can be taken away." — Helmut Jahn Source: BrainyQuote.com

    Read the article

  • saslauthd + PostFix producing password verification and authentication errors

    - by Aram Papazian
    So I'm trying to setup PostFix while using SASL (Cyrus variety preferred, I was using dovecot earlier but I'm switching from dovecot to courier so I want to use cyrus instead of dovecot) but I seem to be having issues. Here are the errors I'm receiving: ==> mail.log <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.info <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure ==> mail.warn <== Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: SASL authentication failure: Password verification failed Aug 10 05:11:49 crazyinsanoman postfix/smtpd[779]: warning: ipname[xx.xx.xx.xx]: SASL PLAIN authentication failed: authentication failure I tried $testsaslauthd -u xxxx -p xxxx 0: OK "Success." So I know that the password/user I'm using is correct. I'm thinking that most likely I have a setting wrong somewhere, but can't seem to find where. Here is my files. Here is my main.cf for postfix: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname # This is already done in /etc/mailname #myhostname = crazyinsanoman.xxxxx.com smtpd_banner = $myhostname ESMTP $mail_name #biff = no # appending .domain is the MUA's job. #append_dot_mydomain = no readme_directory = /usr/share/doc/postfix # TLS parameters smtpd_tls_cert_file = /etc/postfix/smtpd.cert smtpd_tls_key_file = /etc/postfix/smtpd.key smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # Relay smtp through another server or leave blank to do it yourself #relayhost = smtp.yourisp.com # Network details; Accept connections from anywhere, and only trust this machine mynetworks = 127.0.0.0/8 inet_interfaces = all #mynetworks_style = host #As we will be using virtual domains, these need to be empty local_recipient_maps = mydestination = # how long if undelivered before sending "delayed mail" warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_hostname, reject_invalid_hostname, permit # Requirements for the sender details smtpd_sender_restrictions = permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes # not sure of the difference of the next two # but they are needed for local aliasing alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/vmail # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf # Setup the uid/gid of the owner of the mail files - static:5000 allows virtual ones virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 inet_protocols=all # Cyrus SASL Support smtpd_sasl_path = smtpd smtpd_sasl_local_domain = xxxxx.com ####################### ## OLD CONFIGURATION ## ####################### #myorigin = /etc/mailname #mydestination = crazyinsanoman.xxxxx.com, localhost, localhost.localdomain #mailbox_size_limit = 0 #recipient_delimiter = + #html_directory = /usr/share/doc/postfix/html message_size_limit = 30720000 #virtual_alias_domains = ##virtual_alias_maps = hash:/etc/postfix/virtual #virtual_mailbox_base = /home/vmail ##luser_relay = webmaster #smtpd_sasl_type = dovecot #smtpd_sasl_path = private/auth smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes #smtpd_sasl_authenticated_header = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination #virtual_create_maildirsize = yes #virtual_maildir_extended = yes #proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps #virtual_transport = dovecot #dovecot_destination_recipient_limit = 1 Here is my master.cf: # # Postfix master process configuration file. For details on the format # of the file, see the master(5) manual page (command: "man 5 master"). # # Do not forget to execute "postfix reload" after editing this file. 
# # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd submission inet n - - - - smtpd -o smtpd_tls_security_level=encrypt -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # # Many of the following services use the Postfix pipe(8) delivery # agent. See the pipe(8) man page for information about ${recipient} # and other message envelope options. # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # Recent Cyrus versions can use the existing "lmtp" master.cf entry. # # Specify in cyrus.conf: # lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4 # # Specify in main.cf one or more of the following: # mailbox_transport = lmtp:inet:localhost # virtual_transport = lmtp:inet:localhost # # ==================================================================== # # Cyrus 2.1.5 (Amos Gouaux) # Also specify in main.cf: cyrus_destination_recipient_limit=1 # cyrus unix - n n - - pipe user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user} # # ==================================================================== # Old example of delivery via Cyrus. # #old-cyrus unix - n n - - pipe # flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} #dovecot unix - n n - - pipe # flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient} Here is what I'm using for /etc/postfix/sasl/smtpd.conf log_level: 7 pwcheck_method: saslauthd pwcheck_method: auxprop mech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5 allow_plaintext: true auxprop_plugin: mysql sql_hostnames: 127.0.0.1 sql_user: xxxxx sql_passwd: xxxxx sql_database: maildb sql_select: select crypt from users where id = '%u' As you can see I'm trying to use mysql as my authentication method. The password in 'users' is set through the 'ENCRYPT()' function. I also followed the methods found in http://www.jimmy.co.at/weblog/?p=52 in order to redo /var/spool/postfix/var/run/saslauthd as that seems to be a lot of people's problems, but that didn't help at all. Also, here is my /etc/default/saslauthd START=yes DESC="SASL Authentication Daemon" NAME="saslauthd" # Which authentication mechanisms should saslauthd use? (default: pam) # # Available options in this Debian package: # getpwent -- use the getpwent() library function # kerberos5 -- use Kerberos 5 # pam -- use PAM # rimap -- use a remote IMAP server # shadow -- use the local shadow password file # sasldb -- use the local sasldb database file # ldap -- use LDAP (configuration is in /etc/saslauthd.conf) # # Only one option may be used at a time. See the saslauthd man page # for more information. # # Example: MECHANISMS="pam" MECHANISMS="pam" MECH_OPTIONS="" THREADS=5 OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r" I had heard that potentially changing MECHANISM to MECHANISMS="mysql" but obviously that didn't help as is shown by the options listed above and also by trying it out anyway in case the documentation was outdated. So, I'm now at a loss... I have no idea where to go from here or what steps I need to do to get this working =/ Anyone have any ideas? EDIT: Here is the error that is coming from auth.log ... 
I don't know if this will help at all, but here you go: Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql auxprop plugin using mysql engine Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1' Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: begin transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from userPassword user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin create statement from cmusaslsecretPLAIN user xxxxxx.com Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin doing query select crypt from users where id = '[email protected]'; Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: commit transaction Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin Parse the username [email protected] Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin try and connect to a host Aug 11 17:19:56 crazyinsanoman postfix/smtpd[9503]: sql plugin trying to open db 'maildb' on host '127.0.0.1'

    Read the article

  • IIS HTTP Error 403.1 - Forbidden: Execute access is denied

    - by coxymla
    I have a ASP.NET 1.1 application running on IIS 6 / Windows Server 2003. It's our application, but we're trying to specifically replicate a customer's installation so the app folder has been copied entirely from their production server onto our test machine, and then we've created the Virtual Directory and Web Application for IIS manually. The problem I have is that when we access the app, we get the standard IIS security error message: The page cannot be displayed You have attempted to execute a CGI, ISAPI, or other executable program from a directory that does not allow programs to be executed. -------------------------------------------------------------------------------- Please try the following: •Contact the Web site administrator if you believe this directory should allow execute access. HTTP Error 403.1 - Forbidden: Execute access is denied. Internet Information Services (IIS) Now this is pretty standard, except as far as I can see it's not anything so simple. I have checked: IIS user has read access to the directory IIS user and Network Service users have read/write access to the Temporary ASP.NET Files folder Virtual directory is set to the correct version of ASP.NET ASP.NET 1.1 Web Service Extension is allowed Virtual directory has the correct mappings of file extensions and all verbs to the aspnet 1.1 DLL Virtual directory properties allow Scripts and Executables to be run Anonymous access is turned on and the username and password is correct What am I missing?

    Read the article

  • Maintaining ISAPI Rewrite Path with the ASP.NET tilde (~)

    - by Adam
    My team is upgrading from ASP.NET 3.5 to ASP.NET 4.0. We are currently using Helicon ISAPI Rewrite to map http://localhost/<account-name>/default.aspx to http://localhost/<virtual-directory>/default.aspx?AccountName=<account-name> where <account-name> is a query string variable and <virtual-directory> is a virtual directory (naturally). Before the upgrade the tilde (~) resolved to http://localhost/<account-name>/... (which I want it to do) and after the upgrade the tilde resolves to http://localhost/<virtual-directory>/... which results in an error because the <account-name> query string is required. I'd like to avoid going down the road of replacing everything with relative paths because there are several features in our system that use the entire URL instead of just the relative path. For what it's worth I'm using IIS7 in Windows 7, Visual Studio 2010 with ASP.NET 4.0 and the 64 bit Helicon ISAPI Rewrite. If I switch back to the ASP.NET 3.5 version then it still works fine (leading me to believe nothing changed in IIS unless it's within the 4.0 app pool - when I switch back and forth between 3.5 and 4.0 I have to change the app pool in IIS). Any ideas? Thanks in advance!

    Read the article

  • Why am I getting a Heap Corruption Error?

    - by vaidya.atul
    I am new to C++. I am getting HEAP CORRUPTION ERROR. Any help will be highly appreciated. Below is my code class CEntity { //some member variables CEntity(string section1,string section2); CEntity(); virtual ~CEntity(); //pure virtual function .. virtual CEntity* create()const =0; }; I derive CLine from CEntity as below class CLine:public CEntity { // Again some variables ... // Constructor and destructor CLine(string section1,string section2); CLine(); ~CLine(); CLine* Create() const; } // CLine Implementation CLine::CLine(string section1,string section2):CEntity(section1,section2){}; CLine::CLine(); CLine* CLine::create()const{return new CLine();} I have another class CReader which uses CLine object and populates it in a multimap as below class CReader { public: CReader(); ~CReader(); multimap<int,CEntity*>m_data_vs_entity; }; //CReader Implementation CReader::CReader() { m_data_vs_entity.clear(); }; CReader::~CReader() { multimap<int,CEntity*>::iterator iter; for(iter = m_data_vs_entity.begin();iter!=m_data_vs_entity.end();iter++) { CEntity* current_entity = iter->second; if(current_entity) delete current_entity; } m_data_vs_entity.clear(); } I am reading the data from a file and then populating the CLine Class.The map gets populated in a function of CReader class. Since CEntity has a virtual destructor, I hope the piece of code in CReader's destructor should work. In fact, it does work for small files but I get HEAP CORRUPTION ERROR while working with bigger files. If there is something fundamentally wrong, then, please help me find it, as I have been scratching my head for quit some time now. Thanks in advance and awaiting reply, Regards, Atul

    Read the article

  • BindAttribute, Exclude nested properties for complex types

    - by David Board
    I have a 'Stream' model: public class Stream { public int ID { get; set; } [Required] [StringLength(50, ErrorMessage = "Stream name cannot be longer than 50 characters.")] public string Name { get; set; } [Required] [DataType(DataType.Url)] public string URL { get; set; } [Required] [Display(Name="Service")] public int ServiceID { get; set; } public virtual Service Service { get; set; } public virtual ICollection<Event> Events { get; set; } public virtual ICollection<Monitor> Monitors { get; set; } public virtual ICollection<AlertRule> AlertRules { get; set; } } For the 'create' view for this model, I have made a view model to pass some additional information to the view: public class StreamCreateVM { public Stream Stream { get; set; } public SelectList ServicesList { get; set; } public int SelectedService { get; set; } } Here is my create post action: [HttpPost] [ValidateAntiForgeryToken] public ActionResult Create([Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] StreamCreateVM viewModel) { if (ModelState.IsValid) { db.Streams.Add(viewModel.Stream); db.SaveChanges(); return RedirectToAction("Index", "Service", new { id = viewModel.Stream.ServiceID }); } return View(viewModel); } Now, this all works, apart from the [Bind(Include="Stream, Stream.Name, Stream.ServiceID, SelectedService")] bit. I can't seem to Include or Exclude properties within a nested object.
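
    The default MVC model binder matches Include/Exclude against top-level property names only, so dotted entries such as "Stream.Name" generally do not filter nested properties. A common workaround (sketched below, not from the original post; the Prefix and parameter names are assumptions based on the view model above) is to bind the nested Stream as its own action parameter:

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Create(
        [Bind(Prefix = "Stream", Include = "Name,ServiceID")] Stream stream,
        int selectedService)
    {
        if (ModelState.IsValid)
        {
            db.Streams.Add(stream);
            db.SaveChanges();
            return RedirectToAction("Index", "Service", new { id = stream.ServiceID });
        }

        // Rebuild the view model (ServicesList would need repopulating) before redisplaying the form.
        var viewModel = new StreamCreateVM { Stream = stream, SelectedService = selectedService };
        return View(viewModel);
    }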

    Read the article

  • When I add a database table to a DBML file via LINQ to SQL, I get a slew of compiler errors.

    - by Zian Choy
    Whenever I add a certain table to a DBML file via LINQ to SQL, I get 102 errors in my VB NET project. Some of the errors: Error 1 Attribute 'TableAttribute' cannot be applied multiple times. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 74 2 EMS Reality Check Error 2 'emptyChangingEventArgs' is already declared as 'Private Shared emptyChangingEventArgs As System.ComponentModel.PropertyChangingEventArgs' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 78 17 EMS Reality Check Error 3 '_GroupID' is already declared as 'Private _GroupID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 80 10 EMS Reality Check Error 4 '_ID' is already declared as 'Private _ID As Integer' in this class. C:\Documents and Settings\zchoy\My Documents\Virtual EMS Deployment\Life And Death\Life And Death\ShearwaterEMS.designer.vb 82 10 EMS Reality Check Any suggestions for getting the table to work with LINQ to SQL will be welcomed. The table's properties: Group ID ID (Primary Key) Contact Title UseGroupAddress InternationalFormat Address1 Address2 City State ZipCode Country Phone Fax EMailAddress Notes DateAdded AddedBy DateChanged ChangedBy Active ExternalReference ChangeCounter PhoneLabel FaxLabel

    Read the article

  • count on LINQ union

    - by brechtvhb
    I'm having this LINQ statement: List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID); query = (from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID where domains.Contains(uug.UserGroup) select doc) .Union(from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID where domains.Contains(uug.UserGroup) select doc); Running this statement doesn't cause any problems, but when I want to count the result set the query suddenly runs quite slowly. totalRecords = query.Count(); The result of this query is: SELECT COUNT([t5].[DocumentID]) FROM ( SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo] FROM ( SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo] FROM [dbo].[Document] AS [t0] INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID] WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6) UNION SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo] FROM [dbo].[Document] AS [t2] INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID] WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6) ) AS [t4] ) AS [t5] Can anyone help me improve the speed of the count query? Thanks in advance!
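
    One option to try (a sketch, not a verified fix) is to express both directions in a single query for the count, so the provider emits one statement instead of counting a UNION of two subqueries; the Any() call also de-duplicates naturally because each document is visited once:

    // Assumes _db.Repository<T>() returns IQueryable<T>, as in the snippet above.
    var totalRecords =
        (from doc in _db.Repository<Document>()
         where _db.Repository<User_UserGroup>().Any(uug =>
                   (uug.User_ID == doc.DocumentFrom || uug.User_ID == doc.DocumentTo)
                   && domains.Contains(uug.UserGroup))
         select doc).Count();

    If the paged result list itself still needs the Union, the count can be computed separately with the query above while the display query stays as it is.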

    Read the article

  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables) CreateDate CreateUser UpdateDate UpdateUser Currently we are programatically updating audit information. Ex: if(changed){ entity.UpdatedOn = DateTime.Now; entity.UpdatedBy = Environment.UserName; context.SaveChanges(); } But I am looking for a more automated solution. During save changes, if an entity is created/updated I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how i could do this? I would prefer to not do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but in order to achieve this i would either have to use reflection (in which I don't want to do ) or derive a base class. Assuming i go down this route how would I achieve this? For example EXAMPLE_DB_TABLE CODE NAME --Audit Tables CREATE_DATE CREATE_USER UPDATE_DATE UPDATE_USER And if i create a base class public abstract class IUpdatable{ public virtual DateTime CreateDate {set;} public virtual string CreateUser { set;} public virtual DateTime UpdateDate { set;} public virtual string UpdateUser { set;} } The end goal is to be able to do something like... public overrride void SaveChanges(){ //Go through state manager and update audit infromation //FOREACH changed entity in state manager if(entity is IUpdatable){ //If state is created... update create audit. //if state is updated... update update audit } } But I am not sure how I go about generating the code that would extend the interface.
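
    With the EF 4 ObjectContext this can be done by overriding SaveChanges(SaveOptions) and walking the ObjectStateManager. A minimal sketch, assuming a generated context class (MyEntities is a made-up name) and the IUpdatable base class from above:

    using System;
    using System.Data;            // EntityState
    using System.Data.Objects;    // ObjectContext, ObjectStateEntry, SaveOptions

    public partial class MyEntities
    {
        public override int SaveChanges(SaveOptions options)
        {
            // Added or modified entities only; unchanged and deleted rows are left alone.
            foreach (ObjectStateEntry entry in ObjectStateManager
                .GetObjectStateEntries(EntityState.Added | EntityState.Modified))
            {
                if (entry.IsRelationship)
                    continue;                       // skip relationship entries

                var auditable = entry.Entity as IUpdatable;
                if (auditable == null)
                    continue;                       // not an audited entity

                if (entry.State == EntityState.Added)
                {
                    auditable.CreateDate = DateTime.Now;
                    auditable.CreateUser = Environment.UserName;
                }

                auditable.UpdateDate = DateTime.Now;
                auditable.UpdateUser = Environment.UserName;
            }

            return base.SaveChanges(options);
        }
    }

    The text-template side then only needs to make each generated entity derive from IUpdatable; the cast in the loop does the type check, so no reflection is required.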

    Read the article

  • IoC & Interfaces Best Practices

    - by n8wrl
    I'm experimenting with IoC on my way to TDD by fiddling with an existing project. In a nutshell, my question is this: what are the best practices around IoC when public and non-public methods are of interest? There are two classes: public abstract class ThisThingBase { public virtual void Method1() {} public virtual void Method2() {} public ThatThing GetThat() { return new ThatThing(this); } internal virtual void Method3() {} internal virtual void Method4() {} } public class Thathing { public ThatThing(ThisThingBase thing) { m_thing = thing; } ... } ThatThing does some stuff using its ThisThingBase reference to call methods that are often overloaded by descendents of ThisThingBase. Method1 and Method2 are public. Method3 and Method4 are internal and only used by ThatThings. I would like to test ThatThing without ThisThing and vice-versa. Studying up on IoC my first thought was that I should define an IThing interface, implement it by ThisThingBase and pass it to the ThatThing constructor. IThing would be the public interface clients could call but it doesn't include Method3 or Method4 that ThatThing also needs. Should I define a 2nd interface - IThingInternal maybe - for those two methods and pass BOTH interfaces to ThatThing?
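
    One possible shape for the two-interface idea, sketched with assumed names and assuming ThatThing lives in the same assembly (so an internal interface is visible to it):

    public interface IThing
    {
        void Method1();
        void Method2();
    }

    // Narrow contract for the members only ThatThing needs.
    internal interface IThingInternal
    {
        void Method3();
        void Method4();
    }

    public abstract class ThisThingBase : IThing, IThingInternal
    {
        public virtual void Method1() { }
        public virtual void Method2() { }

        internal virtual void Method3() { }
        internal virtual void Method4() { }

        // Internal members cannot implicitly implement an interface,
        // so the explicit implementations forward to the internal virtuals.
        void IThingInternal.Method3() { Method3(); }
        void IThingInternal.Method4() { Method4(); }

        public ThatThing GetThat() { return new ThatThing(this); }
    }

    public class ThatThing
    {
        private readonly IThingInternal m_thing;

        // Depending on the narrow interface keeps ThatThing testable
        // with a fake IThingInternal instead of a real ThisThingBase.
        public ThatThing(IThingInternal thing) { m_thing = thing; }
    }

    If ThatThing also needs Method1/Method2, it can take both interfaces in its constructor, or IThingInternal can be widened to include them.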

    Read the article

  • Why is the Finalize method not allowed to be overridden?

    - by somaraj
    I am new to .NET and I am confused by the destructor mechanism in C#; please clarify. In C#, destructors are converted to the Finalize method by the CLR. If we try to override it directly (not using a destructor), we get an error: Error 2 Do not override object.Finalize. Instead, provide a destructor. But it seems that the Object class implementation in mscorlib.dll has Finalize defined as protected override void Finalize(){}, so why can't we override it? That's what virtual functions are for. Why is the design like that? Is it to be consistent with the C++ destructor concept? Also, when we go to the definition of the Object class, there is no mention of the Finalize method, so how does the mscorlib.dll definition show the Finalize function? Does it mean that the default destructor is converted to the Finalize method? public class Object { public Object(); public virtual bool Equals(object obj); public static bool Equals(object objA, object objB); public virtual int GetHashCode(); public Type GetType(); protected object MemberwiseClone(); public static bool ReferenceEquals(object objA, object objB); public virtual string ToString(); }
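
    For reference, here is the standard way the destructor syntax, Finalize and IDisposable fit together (a generic illustration, not tied to the question's code). The compiler rewrites the destructor as a protected override of Finalize whose body is wrapped in try/finally with a call to base.Finalize():

    using System;

    public class ResourceHolder : IDisposable
    {
        private IntPtr _handle;        // stand-in for some unmanaged resource
        private bool _disposed;

        // Destructor syntax: emitted by the compiler as protected override void Finalize().
        ~ResourceHolder()
        {
            Dispose(false);
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);  // finalization no longer needed
        }

        protected virtual void Dispose(bool disposing)
        {
            if (_disposed) return;

            if (disposing)
            {
                // release managed resources here
            }

            // release unmanaged resources (e.g. _handle) here
            _disposed = true;
        }
    }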

    Read the article

  • C++ LNK2019 error with constructors and destructors in derived classes

    - by BLH
    I have two classes, one inherited from the other. When I compile, I get the following errors: 1Entity.obj : error LNK2019: unresolved external symbol "public: __thiscall Utility::Parsables::Base::Base(void)" (??0Base@Parsables@Utility@@QAE@XZ) referenced in function "public: __thiscall Utility::Parsables::Entity::Entity(void)" (??0Entity@Parsables@Utility@@QAE@XZ) 1Entity.obj : error LNK2019: unresolved external symbol "public: virtual __thiscall Utility::Parsables::Base::~Base(void)" (??1Base@Parsables@Utility@@UAE@XZ) referenced in function "public: virtual __thiscall Utility::Parsables::Entity::~Entity(void)" (??1Entity@Parsables@Utility@@UAE@XZ) 1D:\Programming\Projects\Caffeine\Debug\Caffeine.exe : fatal error LNK1120: 2 unresolved externals I really can't figure out what's going on.. can anyone see what I'm doing wrong? I'm using Visual C++ Express 2008. Here are the files.. "include/Utility/Parsables/Base.hpp" #ifndef CAFFEINE_UTILITY_PARSABLES_BASE_HPP #define CAFFEINE_UTILITY_PARSABLES_BASE_HPP namespace Utility { namespace Parsables { class Base { public: Base( void ); virtual ~Base( void ); }; } } #endif //CAFFEINE_UTILITY_PARSABLES_BASE_HPP "src/Utility/Parsables/Base.cpp" #include "Utility/Parsables/Base.hpp" namespace Utility { namespace Parsables { Base::Base( void ) { } Base::~Base( void ) { } } } "include/Utility/Parsables/Entity.hpp" #ifndef CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP #define CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP #include "Utility/Parsables/Base.hpp" namespace Utility { namespace Parsables { class Entity : public Base { public: Entity( void ); virtual ~Entity( void ); }; } } #endif //CAFFEINE_UTILITY_PARSABLES_ENTITY_HPP "src/Utility/Parsables/Entity.cpp" #include "Utility/Parsables/Entity.hpp" namespace Utility { namespace Parsables { Entity::Entity( void ) { } Entity::~Entity( void ) { } } }

    Read the article

  • WHMCS - Mapping of a Manually Created Invoice with its Corresponding Domain

    - by Knowledge Craving
    I am using WHMCS version 4.2.1 for maintaining domain registration & website hosting services, in one of my websites. Currently, the automatic domain registration process is working great for both the existing & new clients. The main way to register domains automatically is to mark the respective invoices as paid, and automatically the domains get registered through my selected registrar. Recently, I am facing a major problem in domain renewals, where the domain has been already registered by us in the past. The problem is that some of the corresponding invoices are not already generated for the domains & so I have to manually create invoice for each of those domain renewals. However, I am unable to map that invoice with the corresponding domain. This is required because unless the domain knows that for its renewal, an invoice has been created, the marking of the invoice as paid will not instantiate the automatic renewal process through my registrar. Can anybody please tell me the probable way of mapping the invoice with its corresponding domain? I've tried to explain the problem as it is occurring in the best way possible. Still, if any more information regarding this is required, please ask. Any help is greatly appreciated.

    Read the article

  • Entity Framework 4 overwrite Equals and GetHashCode of an own class property

    - by Zhok
    Hi, I’m using Visual Studio 2010 with .NET 4 and Entity Framework 4. I’m working with POCO classes and not the EF4 generator. I need to override the Equals() and GetHashCode() methods, but that doesn’t really work. I thought it was something everybody does, but I can’t find anything about the problem online. When I write my own classes and Equals method, I use Equals() on properties which need to be loaded by EF to be filled. Like this: public class Item { public virtual int Id { get; set; } public virtual String Name { get; set; } public virtual List<UserItem> UserItems { get; set; } public virtual ItemType ItemType { get; set; } public override bool Equals(object obj) { Item item = obj as Item; if (obj == null) { return false; } return item.Name.Equals(this.Name) && item.ItemType.Equals(this.ItemType); } public override int GetHashCode() { return this.Name.GetHashCode() ^ this.ItemType.GetHashCode(); } } That code doesn’t work; the problems are in Equals and GetHashCode where I try to get the hash code or check equality via “ItemType”. Every time I get a NullReferenceException when I try to load data via LINQ to Entities. A dirty way to fix it is to catch the NullReferenceException and return false (in Equals) and base.GetHashCode() (in GetHashCode), but I hope there is a better way to fix this problem. I’ve written a little test project with a SQL script for the DB, the POCO domain, an EDMX file and a console test Main method. You can download it here: Download
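
    A null-safe variant (a sketch, not from the post) avoids dereferencing references that EF has not loaded, and prefers the database key when both rows are persisted. Note that the original also checks obj == null after the cast, so any argument that is not an Item would throw as well:

    public override bool Equals(object obj)
    {
        var item = obj as Item;
        if (item == null)
            return false;                      // covers null and non-Item arguments

        // For persisted entities the primary key is the safest identity.
        if (this.Id != 0 && item.Id != 0)
            return this.Id == item.Id;

        return string.Equals(this.Name, item.Name)
               && object.Equals(this.ItemType, item.ItemType);   // static Equals handles nulls
    }

    public override int GetHashCode()
    {
        // Id is a plain scalar, so this can never trigger a lazy load or a NullReferenceException.
        // Unsaved entities (Id == 0) all share one bucket, which is acceptable for small sets.
        return this.Id.GetHashCode();
    }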

    Read the article

  • Fluent NHibernate mapping List<Point> as value to single column

    - by Paja
    I have this class: public class MyEntity { public virtual int Id { get; set; } public virtual List<Point> Vectors { get; set; } } How can I map the Vectors in Fluent NHibernate to a single column (as value)? I was thinking of this: public class Vectors : ISerializable { public List<Point> Vectors { get; set; } /* Here goes ISerializable implementation */ } public class MyEntity { public virtual int Id { get; set; } public virtual Vectors Vectors { get; set; } } Is it possible to map the Vectors like this, hoping that Fluent NHibernate will initialize Vectors class as standard ISerializable? Or how else could I map List<Point> to a single column? I guess I will have to write the serialization/deserialization routines myself, which is not a problem, I just need to tell FNH to use those routines. I guess I should use IUserType or ICompositeUserType, but I have no idea how to implement them, and how to tell FNH to cooperate.
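
    One way to keep the points in a single column is a custom IUserType that serializes the list to a delimited string. The sketch below is illustrative only: the type names, the "x,y;x,y" storage format, System.Drawing.Point and the classic NHibernate 2.x/3.x IUserType signatures (which changed in later NHibernate versions) are all assumptions:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Drawing;          // Point (assumed)
    using System.Linq;
    using NHibernate;
    using NHibernate.SqlTypes;
    using NHibernate.UserTypes;

    public class PointListUserType : IUserType
    {
        public SqlType[] SqlTypes { get { return new SqlType[] { new StringSqlType() }; } }
        public Type ReturnedType { get { return typeof(List<Point>); } }
        public bool IsMutable { get { return true; } }

        public object NullSafeGet(IDataReader rs, string[] names, object owner)
        {
            var raw = (string)NHibernateUtil.String.NullSafeGet(rs, names[0]);
            if (string.IsNullOrEmpty(raw)) return new List<Point>();

            // "x,y;x,y" back into points
            return raw.Split(';')
                      .Select(p => p.Split(','))
                      .Select(xy => new Point(int.Parse(xy[0]), int.Parse(xy[1])))
                      .ToList();
        }

        public void NullSafeSet(IDbCommand cmd, object value, int index)
        {
            var points = value as List<Point>;
            string raw = points == null
                ? null
                : string.Join(";", points.Select(p => p.X + "," + p.Y).ToArray());
            NHibernateUtil.String.NullSafeSet(cmd, raw, index);
        }

        public object DeepCopy(object value)
        {
            var points = value as List<Point>;
            return points == null ? null : new List<Point>(points);
        }

        public new bool Equals(object x, object y)
        {
            var a = x as List<Point>;
            var b = y as List<Point>;
            if (ReferenceEquals(a, b)) return true;
            if (a == null || b == null) return false;
            return a.SequenceEqual(b);
        }

        public int GetHashCode(object x) { return x == null ? 0 : x.GetHashCode(); }
        public object Replace(object original, object target, object owner) { return DeepCopy(original); }
        public object Assemble(object cached, object owner) { return DeepCopy(cached); }
        public object Disassemble(object value) { return DeepCopy(value); }
    }

    The property can then be mapped in the ClassMap with Map(x => x.Vectors).CustomType<PointListUserType>() against a varchar column, instead of mapping the list as a collection.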

    Read the article

  • Firefox leading dot in cookie issue

    - by Jon
    Hi all, We are having an annoying issue with Firefox and cookies. We have the following domains: sub1.mydomain.com, sub2.mydomain.com, sub3.mydomain.com and otherdomain.com. We are converting our framework to be multilingual and providing a drop-down to change the language at any point on the site. The code base is shared across all the domains above. We cannot set a cookie across all "mydomain.com" sites; it has to be set on each of the subdomains. To get this to work we set a JavaScript cookie when the user chooses a new language. When the page posts back to the server, the code picks this up and sets the user's preference to that new language code (this is all C# and ASP.NET). We have to set the host to "subX.mydomain.com" and the path to "/" in the cookie so that it applies just to that subdomain and all parts of it. This works great on all browsers apart from Firefox. It seems that Firefox will prepend a dot to the beginning of the domain, so ".subX.mydomain.com". When the code posts back with Firefox, the cookie is always null. Has anyone had this situation? (I imagine it is not all that uncommon.) I have read a lot of people saying "remove the domain from the cookie", but that cannot work for us as we have multiple subdomains that need their own cookie values. Thanks

    Read the article

  • FluentNHibernate, getting 1 column from another table

    - by puffpio
    We're using FluentNHibernate and we have run into a problem where our object model requires data from two tables, like so:

        public class MyModel
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual int FooId { get; set; }
            public virtual string FooName { get; set; }
        }

    There is a MyModel table that has Id, Name, and FooId as a foreign key into the Foo table. The Foo table contains Id and FooName. This problem is very similar to another post here: http://stackoverflow.com/questions/1896645/nhibernate-join-tables-and-get-single-column-from-other-table but I am trying to figure out how to do it with FluentNHibernate. I can map the Id, Name, and FooId very easily, but I am having trouble mapping FooName. This is my class map:

        public class MyModelClassMap : ClassMap<MyModel>
        {
            public MyModelClassMap()
            {
                this.Id(a => a.Id).Column("AccountId").GeneratedBy.Identity();
                this.Map(a => a.Name);
                this.Map(a => a.FooId);

                // my attempt to map FooName but it doesn't work
                this.Join("Foo", join => join.KeyColumn("FooId").Map(a => a.FooName));
            }
        }

    With that mapping I get this error:

        The element 'class' in namespace 'urn:nhibernate-mapping-2.2' has invalid child element 'join' in namespace 'urn:nhibernate-mapping-2.2'. List of possible elements expected: 'joined-subclass, loader, sql-insert, sql-update, sql-delete, filter, resultset, query, sql-query' in namespace 'urn:nhibernate-mapping-2.2'.

    Any ideas?
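
    One workaround worth trying, suggested here editorially rather than taken from the post, is to avoid the join element altogether and expose FooName as a read-only formula property, so the generated mapping contains no join element at all; Fluent NHibernate supports this through Formula() on a property mapping. The SQL inside the formula assumes the Foo table and column names described above.

        using FluentNHibernate.Mapping;

        public class MyModelClassMap : ClassMap<MyModel>
        {
            public MyModelClassMap()
            {
                Id(a => a.Id).Column("AccountId").GeneratedBy.Identity();
                Map(a => a.Name);
                Map(a => a.FooId);

                // Read-only projection of Foo.FooName; NHibernate inlines the
                // subquery into the SELECT it generates for MyModel.
                Map(a => a.FooName)
                    .Formula("(select f.FooName from Foo f where f.Id = FooId)");
            }
        }

    The trade-off is that FooName becomes read-only and is fetched through a correlated subquery rather than a SQL join.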

    Read the article

  • LNK2001 error when compiling a Windows Forms application with VC++ 2008

    - by Blin
    I've been trying to write a small application which will work with MySQL in C++. I am using MySQL Server 5.1.41 and the MySQL C++ Connector 1.0.5. Everything compiles fine when I write console applications, but when I try to compile a Windows Forms application in exactly the same way (same libraries, same paths, same project properties) I get these errors:

        Error 1 error LNK2001: unresolved external symbol "public: virtual int __clrcall sql::mysql::MySQL_Savepoint::getSavepointId(void)" (?getSavepointId@MySQL_Savepoint@mysql@sql@@$$FUAMHXZ) test1.obj test1
        Error 2 error LNK2001: unresolved external symbol "public: virtual class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __clrcall sql::mysql::MySQL_Savepoint::getSavepointName(void)" (?getSavepointName@MySQL_Savepoint@mysql@sql@@$$FUAM?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) test1.obj test1

    Following the instructions from here, I've got this:

        Undecoration of :- "?getSavepointId@MySQL_Savepoint@mysql@sql@@UEAAHXZ" is :- "public: virtual int __cdecl sql::mysql::MySQL_Savepoint::getSavepointId(void) __ptr64"
        Undecoration of :- "?getSavepointName@MySQL_Savepoint@mysql@sql@@UEAA?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ" is :- "public: virtual class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __cdecl sql::mysql::MySQL_Savepoint::getSavepointName(void) __ptr64"

    But what should I do now?

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.)

    Class Design 1 -- provide all functionality in a base class, then expose applicable functionality publicly in subclasses:

        public abstract class RepositoryBase
        {
            protected virtual void SelectBase() { /* implementation... */ }
            protected virtual void InsertBase() { /* implementation... */ }
            protected virtual void UpdateBase() { /* implementation... */ }
            protected virtual void DeleteBase() { /* implementation... */ }
        }

        public class ReadOnlyRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
        }

        public class ReadWriteRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
            public void Insert() { InsertBase(); }
            public void Update() { UpdateBase(); }
            public void Delete() { DeleteBase(); }
        }

    Class Design 2 -- read-write class inherits from read-only class:

        public class ReadOnlyRepository
        {
            public void Select() { /* implementation... */ }
        }

        public class ReadWriteRepository : ReadOnlyRepository
        {
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

    Is one of these designs clearly stronger than the other? If so, which one and why? P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)
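
    A third shape worth comparing, added here editorially and not part of the original question, is to model the two capability levels as interfaces and let each repository implement only what it supports; callers then depend on the narrowest interface they need. The repository names below are made up, and the parameterless signatures simply mirror the question's sketch.

        public interface IReadOnlyRepository
        {
            void Select();
        }

        public interface IReadWriteRepository : IReadOnlyRepository
        {
            void Insert();
            void Update();
            void Delete();
        }

        // A writable repository implements the wider interface...
        public class CustomerRepository : IReadWriteRepository
        {
            public void Select() { /* implementation... */ }
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

        // ...while a read-only one exposes nothing that can mutate data.
        public class AuditLogRepository : IReadOnlyRepository
        {
            public void Select() { /* implementation... */ }
        }

    Compared with Design 1, a read-only type never carries hidden write members; compared with Design 2, a read-write type is not forced to be substitutable wherever a read-only one is expected.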

    Read the article

  • NHibernate - getting a single column from another table

    - by Muhammad Akhtar
    I have the following tables:

        Employee: ID, CompanyID, Name   // CompanyID is a foreign key to the Company table
        Company: CompanyID, Name

    I want to map this to the following class:

        public class Employee
        {
            public virtual int ID { get; set; }
            public virtual int CompanyID { get; set; }
            public virtual string Name { get; set; }
            public virtual string CompanyName { get; set; }

            protected Employee() { }
        }

    Here is my XML mapping:

        <class name="Employee" table="Employee" lazy="true">
          <id name="Id" type="Int32" column="Id">
            <generator class="native" />
          </id>
          <property name="CompanyID" column="CompanyID" type="Int32" not-null="false" />
          <property name="Name" column="Name" type="String" length="100" not-null="false" />
        </class>

    What do I need to add to the XML mapping so that CompanyName is populated in my result? Here is my code:

        public ArrayList getTest()
        {
            ISession session = NHibernateHelper.GetCurrentSession();
            string query = "select Employee.*, (Company.Name) CompanyName from Employee inner join Company on Employee.CompanyID = Company.CompanyID";
            ArrayList document = (ArrayList)session.CreateSQLQuery(query, "Employee", typeof(Document)).List();
            return document;
        }

    In the returned result CompanyName is null, while the other columns are fine. Note: in the DB the tables don't have a physical relation. Please suggest a solution. Thanks
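
    One mapping-level option, offered here as an editorial suggestion rather than something given in the post, is to expose CompanyName as a read-only formula property, so the extra column is fetched without any hand-written SQL; the table and column names in the formula are taken from the schema above and may need adjusting.

        <property name="CompanyName"
                  formula="(select c.Name from Company c where c.CompanyID = CompanyID)"
                  type="String" />

    Placed inside the existing <class> element, this makes NHibernate populate CompanyName through a correlated subquery whenever an Employee is loaded, which also works when the tables have no physical foreign-key relation.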

    Read the article
