Search Results

Search found 899 results on 36 pages for 'ef 5 0'.

Page 28/36 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Mysql, SSL and java client problem

    - by CarlosH
    I'm trying to connect to an SSL-enabled MySQL server from my own Java application. After setting up SSL on mysqld and successfully testing an account using "REQUIRE ISSUER and SUBJECT", I wanted to use that account in a Java app. I generated a private key (in a file called keystore.jks) and a CSR using keytool, and signed the CSR with my own CA (the same one used with mysqld and its certificate). Once the CSR was signed, I imported the CA and client certificates into the keystore.jks file. When running the application, the SSL connection can't be established. Relevant logs:
      ...
      [Raw read]: length = 5
      0000: 16 00 00 02 FF                                     .....
      main, handling exception: javax.net.ssl.SSLException: Unsupported record version Unknown-0.0
      main, SEND TLSv1 ALERT: fatal, description = unexpected_message
      Padded plaintext before ENCRYPTION: len = 32
      0000: 02 0A BE 0F AD 64 0E 9A 32 3B FE 76 EF 40 A4 C9   .....d..2;.v.@..
      0010: B4 A7 F3 25 E7 E5 09 09 09 09 09 09 09 09 09 09   ...%............
      main, WRITE: TLSv1 Alert, length = 32
      [Raw write]: length = 37
      0000: 15 03 01 00 20 AB 41 9E 37 F4 B8 44 A7 FD 91 B1   .... .A.7..D....
      0010: 75 5A 42 C6 70 BF D4 DC EC 83 01 0C CF 64 C7 36   uZB.p........d.6
      0020: 2F 69 EC D2 7F                                     /i...
      main, called closeSocket()
      main, called close()
      main, called closeInternal(true)
      main, called close()
      main, called closeInternal(true)
      connection error
      com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
    Any idea why this is happening?
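    One hedged first step, assuming the app uses Connector/J: point the JVM at the keystore, and at a truststore containing the CA, explicitly, since JSSE will not pick up keystore.jks on its own. The paths and passwords below are placeholders.
      java -Djavax.net.ssl.keyStore=/path/to/keystore.jks \
           -Djavax.net.ssl.keyStorePassword=changeit \
           -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
           -Djavax.net.ssl.trustStorePassword=changeit \
           -jar myapp.jar
    The JDBC URL also needs useSSL=true (and ideally requireSSL=true) for Connector/J to attempt TLS at all.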

    Read the article

  • Cannot change PostgreSQL port

    - by Jerec TheSith
    I run PostgreSQL 8.4 as a service on a CentOS 6.2 server. I set port = 21444 and listen_addresses = '*' in /var/lib/pgsql/data/postgresql.conf, and I changed 5432 to 21444 in postmaster.opts and restarted postgres, but when I run netstat -lntp postgresql is still listening on port 5432:
      tcp   0   0 0.0.0.0:5432   0.0.0.0:*   LISTEN   20276/postmaster
    When I restart postgresql I get a write error warning on /proc/self/oom_adj, but the service starts anyway. I read that we could get this error when using virtualized servers, but I don't really know if this has any impact on the port postgresql listens on. The correct pgsql config directory is loaded, /var/lib/pgsql/data:
      [root@srv02 ~]# ps -ef | grep postgres
      root      1358 22140  0 09:42 pts/0  00:00:00 grep postgres
      postgres  9519     1  0 Mar16 ?      00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
      postgres  9573  9519  0 Mar16 ?      00:00:00 postgres: logger process
      postgres  9575  9519  0 Mar16 ?      00:00:05 postgres: writer process
      postgres  9576  9519  0 Mar16 ?      00:00:03 postgres: wal writer process
      postgres  9577  9519  0 Mar16 ?      00:00:01 postgres: autovacuum launcher process
      postgres  9578  9519  0 Mar16 ?      00:00:01 postgres: stats collector process
    Any thoughts? Thanks, Jerec
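    A hedged reading of the ps output above: the postmaster has been running since Mar16 with an explicit -p 5432, which suggests the restart never actually took effect, and postmaster.opts is rewritten at startup rather than being a configuration file you set. On CentOS the init script often takes its port from /etc/sysconfig/pgsql; the exact file name below is an assumption.
      # confirm the running postmaster's start time and flags
      ps -o lstart,args -p $(head -1 /var/lib/pgsql/data/postmaster.pid)
      # hypothetical override where the init script reads it, then a real restart
      echo 'PGPORT=21444' >> /etc/sysconfig/pgsql/postgresql
      service postgresql restart
      netstat -lntp | grep postmaster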

    Read the article

  • Prevent outgoing traffic unless OpenVPN connection is active using pf.conf on Mac OS X

    - by Nick
    I've been able to deny all connections to external networks unless my OpenVPN connection is active using pf.conf. However, I lose Wi-Fi connectivity if the connection is broken by closing and opening the laptop lid or toggling Wi-Fi off and on again. I'm on Mac OS 10.8.1. I connect to the Web via Wi-Fi (from varying locations, including Internet cafés). The OpenVPN connection is set up with Viscosity. I have the following packet filter rules set up in /etc/pf.conf:
      # Deny all packets unless they pass through the OpenVPN connection
      wifi=en1
      vpn=tun0
      block all
      set skip on lo
      pass on $wifi proto udp to [OpenVPN server IP address] port 443
      pass on $vpn
    I start the packet filter service with sudo pfctl -e and load the new rules with sudo pfctl -f /etc/pf.conf. I have also edited /System/Library/LaunchDaemons/com.apple.pfctl.plist and changed the line <string>-f</string> to read <string>-ef</string> so that the packet filter launches at system startup. This all seems to work great at first: applications can only connect to the web if the OpenVPN connection is active, so I'm never leaking data over an insecure connection. But if I close and reopen my laptop lid or turn Wi-Fi off and on again, the Wi-Fi connection is lost, and I see an exclamation mark in the Wi-Fi icon in the status bar. Clicking the Wi-Fi icon shows an "Alert: No Internet connection" message. To regain the connection, I have to disconnect and reconnect Wi-Fi, sometimes five or six times, before the "Alert: No Internet connection" message disappears and I'm able to open the VPN connection again. Other times, the Wi-Fi alert disappears of its own accord, the exclamation mark clears, and I'm able to connect again. Either way, it can take five minutes or more to get a connection again, which can be frustrating. Why does Wi-Fi report "No Internet connection" after losing connectivity, and how can I diagnose and fix this issue?
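    One hedged adjustment worth trying: with block all in place, DHCP renewals and the connectivity probe macOS runs after wake are dropped too, which would explain the exclamation mark. A sketch that keeps the VPN-only policy but lets the interface come back up; treating the DNS pass as an acceptable leak is an assumption you may not share:
      wifi=en1
      vpn=tun0
      set skip on lo
      block all
      pass on $wifi proto udp from any port 68 to any port 67   # DHCP lease renewal
      pass on $wifi proto udp to any port 53                    # DNS, so the probe can resolve
      pass on $wifi proto udp to [OpenVPN server IP address] port 443
      pass on $vpn
    Reloading the rules after wake (sudo pfctl -f /etc/pf.conf) is also worth scripting if the state tables are being flushed.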

    Read the article

  • What is the difference between the Linux and Linux LVM partition type?

    - by ujjain
    Fdisk shows multiple partition types. What is the difference between choosing 83) Linux and 8e) Linux LVM? Choosing 83) Linux also works fine for using LVM; even creating a physical volume on /dev/sdb without a partition table works. Does picking a partition type in fdisk really matter? What is the difference between picking Linux or Linux LVM as the partition type?
      [root@tst-01 ~]# fdisk /dev/sdb
      WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
      switch off the mode (command 'c') and change display units to sectors (command 'u').
      Command (m for help): l
       0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
       1  FAT12           39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
       2  XENIX root      3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
       3  XENIX usr       40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
       4  FAT16 <32M      41  PPC PReP Boot   85  Linux extended  c7  Syrinx
       5  Extended        42  SFS             86  NTFS volume set da  Non-FS data
       6  FAT16           4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
       7  HPFS/NTFS       4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
       8  AIX             4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
       9  AIX bootable    50  OnTrack DM      93  Amoeba          e1  DOS access
       a  OS/2 Boot Manag 51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
       b  W95 FAT32       52  CP/M            9f  BSD/OS          e4  SpeedStor
       c  W95 FAT32 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
       e  W95 FAT16 (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  GPT
       f  W95 Ext'd (LBA) 55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
      10  OPUS            56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
      11  Hidden FAT12    5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
      12  Compaq diagnost 61  SpeedStor       a9  NetBSD          f4  SpeedStor
      14  Hidden FAT16 <3 63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
      16  Hidden FAT16    64  Novell Netware  af  HFS / HFS+      fb  VMware VMFS
      17  Hidden HPFS/NTF 65  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
      18  AST SmartSleep  70  DiskSecure Mult b8  BSDI swap       fd  Linux raid auto
      1b  Hidden W95 FAT3 75  PC/IX           bb  Boot Wizard hid fe  LANstep
      1c  Hidden W95 FAT3 80  Old Minix       be  Solaris boot    ff  BBT
      1e  Hidden W95 FAT1
      Command (m for help):
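    For what it's worth, a minimal sketch of the usual convention: the type byte is advisory metadata that tools and admins key off, so LVM members are normally tagged 8e even though pvcreate itself doesn't enforce it. Device names follow the question:
      fdisk /dev/sdb      # t -> 8e -> w: set the type to Linux LVM and write the table
      pvcreate /dev/sdb1  # works regardless of the type byte
      pvs                 # verify the physical volume is visible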

    Read the article

  • Send Apache Access Logs to syslog

    - by Seer
    We have IBM HTTP Servers (based on Apache 2.0) and want to send the access logs to syslog, in addition to the error logs, which do work. The config we are using is as follows:
      ErrorLog "|/HTTPServer/bin/rotatelogs /archive/http/error_log.%Y%m%d 86400 | /usr/bin/logger -t httpd -plocal6.err"
      LogLevel warn
      LogFormat "%h %{True-Client-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D \"%{Host}i\" %v" combined
      LogFormat "%h %l %u %t \"%r\" %>s %b" common
      LogFormat "%{Referer}i -> %U" referer
      LogFormat "%{User-agent}i" agent
      CustomLog "|exec /usr/bin/logger -t ptseelm-ax3004 -i -p local6.notice" combined
    But the log entries don't even appear in the local syslog.out. Here is what the processes look like:
      ps -ef | grep httpd
      apache   6226000  8388618  0 09:04:01  -  0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      apache   6750220  8388618  0 09:04:01  -  0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      apache   7602390  8388618  0 09:04:01  -  0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      root     8388618        1  0 09:04:01  -  0:00 /HTTPServer/bin/httpd -d /HTTPServer -k start
      root     9044038  8388618  0 09:04:01  -  0:00 /usr/bin/logger -t httpd -plocal6.err
    So there is no logger attached to the child processes... is that the problem? Can someone help me out? :) We have the following in syslog.conf:
      local6.* @somerealipaddress
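    A hedged way to split the problem, assuming an AIX host given the process listing (the syslogd refresh command below is an AIX-ism):
      # 1. does local6 reach syslogd at all, independent of Apache?
      logger -t testtag -p local6.notice "hello from the shell"
      # 2. restart syslogd after any syslog.conf change
      refresh -s syslogd
      # 3. if the manual test logs fine, retry the access-log pipe without 'exec',
      #    which some Apache 2.0 builds handle differently:
      #    CustomLog "|/usr/bin/logger -t ptseelm-ax3004 -i -p local6.notice" combined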

    Read the article

  • Website hosting from home - IIS6

    - by Paul
    I want to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:
      - Registered the domain name at namecheap.com (let's call it mydomain.com)
      - Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0)
      - Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com
      - Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254)
      - Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values)
    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically change the IP if I have a dynamic address, but my IP has remained static since I installed the ISP (since October), so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any 3rd party service. Thanks,
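    A hedged note on why this fails: registering ns1/ns2.mydomain.com as nameservers only helps if an actual DNS server answers on port 53 at that IP, and IIS serves HTTP, not DNS. The usual no-cost fix is to leave DNS with the registrar and point an A record at your IP. A quick check from outside your LAN (the registrar hostname below is hypothetical):
      nslookup www.mydomain.com 0.0.0.0                      # is anything answering DNS at your IP?
      nslookup www.mydomain.com dns1.registrar-servers.com   # after switching to the registrar's DNS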

    Read the article

  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... I now have problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in /etc/init.d. Here is that file:
      #!/bin/sh -e
      # SOLR auto-start
      #
      # description: auto-starts solr engine
      # processname: solr-production
      # pidfile: /var/run/solr-production.pid

      NAME="solr"
      PIDFILE="/var/run/solr-production.pid"
      LOG_FILE="/var/log/solr-production.log"
      SOLR_DIR="/etc/jetty"
      JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
      JAVA="/usr/bin/java"

      start() {
          echo -n "Starting $NAME... "
          if [ -f $PIDFILE ]; then
              echo "is already running!"
          else
              cd $SOLR_DIR
              $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
              sleep 2
              echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
              echo "(Done)"
          fi
          return 0
      }

      stop() {
          echo -n "Stopping $NAME... "
          if [ -f $PIDFILE ]; then
              cd $SOLR_DIR
              $JAVA $JAVA_OPTIONS --stop
              sleep 2
              rm $PIDFILE
              echo "(Done)"
          else
              echo "can not stop, it is not running!"
          fi
          return 0
      }

      case "$1" in
          start)
              start ;;
          stop)
              stop ;;
          restart)
              stop
              sleep 5
              start ;;
          *)
              echo "Usage: $0 (start | stop | restart)"
              exit 1 ;;
      esac
    Whenever I do solr start I get this error:
      Error occurred during initialization of VM
      Could not reserve enough space for object heap
    I think this is because of the file above... Also, here is where I have Solr installed: /var/www/solr, and here is where the start.jar file is located: /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
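    Two hedged fixes suggested by the error and the paths in the question: the JVM is being asked for a 1024m heap the box can't grant, and SOLR_DIR points at /etc/jetty while start.jar lives in /var/www. In /etc/init.d/solr:
      SOLR_DIR="/var/www"
      JAVA_OPTIONS="-Xmx256m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
    then:
      /etc/init.d/solr start
      tail -f /var/log/solr-production.log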

    Read the article

  • Cannot Start Passenger 3.0.18 Using Mountain Lion (OS X Server) and RVM

    - by LightBe Corp
    I recently did a clean install of Mountain Lion on my Mac Mini Server. I installed Passenger 3.0.18 using a gem according to the directions on http://www.phusionpassenger.com, with no errors that I could see:
      rvmsudo gem install passenger-enterprise-server-3.0.18.gem
      rvmsudo passenger-install-apache2-module
    Here are my entries in /etc/apache2/httpd.conf, with my username masked:
      LoadModule passenger_module /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18/ext/apache2/mod_passenger.so
      PassengerRoot /Users/username/.rvm/gems/ruby-1.9.3-p327/gems/passenger-enterprise-server-3.0.18
      PassengerRuby /Users/username/.rvm/wrappers/ruby-1.9.3-p327/ruby
    I uncommented the following statement:
      Include /private/etc/apache2/extra/httpd-vhosts.conf
    Here is a sample virtual host entry; I have three of them in the file:
      <VirtualHost *:80>
        ServerName www.mydomain.com
        ServerAlias mydomain.com
        PassengerAppRoot /Users/username/Sites/myfolder/
        DocumentRoot /Users/username/Sites/myfolder/public
        <Directory /Users/username/Sites/myfolder/public>
          Allow from all
          AllowOverride all
          Options -MultiViews
        </Directory>
      </VirtualHost>
    I have restarted Apache several times. Here is information from my server:
      [~]$ ps -ef | grep Passenger
        501 18804   303   0 12:39PM ttys000    0:00.00 grep Passenger
      [~]$ rvmsudo passenger-status
      Password:
      ERROR: Phusion Passenger doesn't seem to be running.
      [~]$ rvmsudo passenger-config --version
      3.0.18
    I have tried searching online for this. I was surprised that there was not all that much on this specific error, even though, from my understanding, Passenger has been around for a few years. I have posted this issue on the Phusion Passenger Google Groups but have not heard anything. Any help would be appreciated, the sooner the better LOL. Seriously, I need to have one of my three websites up by tomorrow evening. This is the only issue stopping that from happening. Thanks again.
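    Some hedged first checks, assuming the stock Apache shipped with OS X Server:
      sudo apachectl configtest                          # does the config parse at all?
      httpd -V | grep -i server_config_file              # is httpd.conf even the file being read?
      sudo apachectl -M 2>/dev/null | grep -i passenger  # did mod_passenger load?
      sudo apachectl restart && tail -f /var/log/apache2/error_log
    If passenger_module is missing from the module list, the LoadModule line is sitting in a file the running server never reads, which the Server.app configuration layout on Mountain Lion makes surprisingly easy to do.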

    Read the article

  • linux container bridge filters ARP reply

    - by Dani Camps
    I am using kernel 3.0, and I have configured a Linux container that is bridged to a tap interface on my host computer. This is the bridge configuration:
      :~$ brctl show bridge-1
      bridge name   bridge id           STP enabled   interfaces
      bridge-1      8000.9249c78a510b   no            ns3-mesh-tap-1
                                                      vethjUErij
    My problem is that this bridge is dropping ARP replies that come from the ns3-mesh-tap-1 interface. If I statically populate the ARP tables and ping directly, everything works, so it has to be something related to ARP. I have read about similar problems in related posts, and I have tried the solutions explained therein, but nothing seems to work. Specifically:
      ~$ grep net.bridge /etc/sysctl.conf
      net.bridge.bridge-nf-call-arptables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-filter-vlan-tagged = 0
      net.bridge.bridge-nf-filter-pppoe-tagged = 0
    arptables and ebtables are not installed. The iptables FORWARD policy is set to accept:
      Chain FORWARD (policy ACCEPT)
      target prot opt source destination
    The bridged interfaces are set to PROMISC:
      ~$ ifconfig
      ns3-mesh-tap-1 Link encap:Ethernet HWaddr 1a:c7:24:ef:36:1a
        ...
        UP BROADCAST PROMISC MULTICAST MTU:1500 Metric:1
      vethjUErij Link encap:Ethernet HWaddr aa:b0:d1:3b:9a:0a
        ...
        UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
    The MACs learned by the bridge are correct (checked with brctl showmacs). Any insight on what I am doing wrong would be greatly appreciated. Best regards, Daniel
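    A hedged observation on the output above: ns3-mesh-tap-1 lacks the RUNNING flag that vethjUErij has, which on tap devices usually means no process is holding the device open. Watching ARP on each leg should localize the drop:
      tcpdump -ni ns3-mesh-tap-1 arp   # does the reply arrive on the tap?
      tcpdump -ni bridge-1 arp         # does it make it onto the bridge?
      tcpdump -ni vethjUErij arp       # does it head toward the container?
      # and confirm the running values match sysctl.conf, which may never have been applied:
      sysctl net.bridge.bridge-nf-call-arptables net.bridge.bridge-nf-call-iptables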

    Read the article

  • Looking for way to log process terminations on OS X (Mac)

    - by Stan Sieler
    I'm looking for a way to log all process terminations on my Mac (OS X 10.6.8), and see the pid, timestamp, and process name. I've implemented something similar for HP-UX, but it required a kernel-level driver and intercepting several variations of "exit()" (the normal one, and the one invoked on behalf of a process while it's aborting). Why do I want the info? I've been seeing messages in my system log file (dmesg) like:
      CODE SIGNING: cs_invalid_page(0x1000): p=91550[GoogleSoftwareUp] clearing CS_VALID
      CODE SIGNING: cs_invalid_page(0x1000): p=92088[GoogleSoftwareUp] clearing CS_VALID
    Although dmesg lacks timestamps, apps/Utilities/Console : Database : all : search for CS_VALID shows that the message appears about once every 58 1/2 minutes. I suspect the number after "p=" is a process id (pid)... but for a process that has long since terminated by the time I see the message. So, if there were a process termination log mechanism that recorded the pid, the time of termination, the reason for termination, and the process name (at time of termination), that would probably allow me to determine who's causing those errors to be logged! (No, I'm not running Chrome on my Mac, and "ps -ef | grep -i goog" gets no hits either... I'm not consciously running any Google apps on the Mac.) Thanks, Stan [email protected]
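    A hedged sketch using DTrace, which ships with 10.6 and needs no custom kernel driver; the format string is illustrative:
      sudo dtrace -qn 'proc:::exit { printf("%Y pid=%d ppid=%d %s\n", walltimestamp, pid, ppid, execname); }'
    Left running (or redirected to a file), that logs the wall-clock time, pid, parent pid, and name of every exiting process. As an aside, "GoogleSoftwareUp" looks like a truncated GoogleSoftwareUpdateAgent, which launchd can keep running on a schedule even when no Google application is open.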

    Read the article

  • Determine from where "sh" is being run under the apache www-data user using pf or netstat

    - by Eugene van der Merwe
    I am working with a compromised Ubuntu 8.04 Plesk 9.5.4 server. It seems that a script on the server is continuously doing reverse lookups on random IPs on the Internet. I first spotted it by using top, and then noticed flashes of this coming up continuously:
      sh -c host -W 1 '198.204.241.10'
    I wrote this script to interrogate ps every second and see how frequently it happens:
      #!/bin/bash
      while :
      do
          ps -ef | egrep -i "sh -c host"
          sleep 1
      done
    The results are that this script runs often, every few seconds:
      www-data 17762  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 17772  8332 1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 17879 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 17879 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      root     18031 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
      www-data 18078 16704 0 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
      www-data 18125 17996 0 10:07 ?     00:00:00 sh -c host -W 1 '91.124.51.65'
      root     18131 17756 0 10:07 pts/2 00:00:00 egrep -i sh -c host
      www-data 18137 17869 0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
      www-data 18137 17869 1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
    My theory is that if I can see who is launching the sh process, or from where it's launched, I can isolate the problem further. Can somebody please guide me using netstat or ps to identify from where sh is being run? I might get many suggestions that the OS is out of date, and so is the Plesk, but please bear in mind there are some very concrete reasons why this server is running legacy software. My question is aimed at advanced Linux systems administrators who have in-depth experience with security compromises and with using netstat and ps to get to the bottom of them.
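    One hedged refinement of the watch script: the PPID column already identifies the spawners (8332, 17869, 16704, 17996 above), so resolving those to commands and working directories should point at the web script responsible:
      for ppid in 8332 17869 16704 17996; do
        ps -p "$ppid" -o pid,ppid,user,args       # who spawned it?
        ls -l /proc/$ppid/cwd /proc/$ppid/exe 2>/dev/null   # where does the parent live?
      done
    If the parents turn out to be plain Apache children, the Plesk per-domain access logs for the same timestamps are the next place to look.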

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external web services, and we have a very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:
      ps -ef | grep httpd | wc -l
      300
    I have googled and found numerous suggestions but nothing seems to work. The following is some configuration I have done, taken directly from online resources.
    Apache:
      <IfModule prefork.c>
        StartServers 100
        MinSpareServers 10
        MaxSpareServers 10
        ServerLimit 50000
        MaxClients 50000
        MaxRequestsPerChild 2000
      </IfModule>
    Tomcat:
      <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
          connectionTimeout="600000" redirectPort="8443" enableLookups="false"
          maxThreads="1500"
          compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
          compression="on"/>
    sysctl.conf:
      net.ipv4.tcp_tw_reuse=1
      net.ipv4.tcp_tw_recycle=1
      fs.file-max = 5049800
      vm.min_free_kbytes = 204800
      vm.page-cluster = 20
      vm.swappiness = 90
      net.ipv4.tcp_rfc1337=1
      net.ipv4.tcp_max_orphans = 65536
      net.ipv4.ip_local_port_range = 5000 65000
      net.core.somaxconn = 1024
    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; I'm probably going wrong somewhere in my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second delayed] web service to respond. Please help.
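    A few hedged checks before adding more tunables, since the settings above should not cap at 300:
      # ServerLimit is read only at full startup, not on graceful restart
      apachectl stop && apachectl start
      # what limits is the running parent actually under?
      cat /proc/$(pgrep -o httpd)/limits | egrep -i 'processes|open files'
      # is the ceiling Apache's, or the 300s backend holding every slot?
      ss -ant | awk '{print $1}' | sort | uniq -c
    As a sanity check on the arithmetic: with a 300-second backend, 300 busy workers is only about one new request per second, so raising MaxClients buys headroom, but timing out or queueing calls to the slow web service is the structural fix.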

    Read the article

  • Server load increases with a lot of httpd requests sharing the same PPID

    - by user3740955
    I can see that my server load increases to the 200-300 range. A week ago the maximum load was around 20-25. In top and ps -ef I can see a lot of httpd processes, and the PPID of most of the httpd processes is the same PID. When I checked, that parent process is owned by root. Please let me know how I can reduce the server load. I have searched a lot for this but have not been able to find a proper solution. Please see below part of the process listing:
      apache 29698 2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29700 2062  3 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29701 2062 10 16:54 ? 00:00:02 /usr/sbin/httpd
      apache 29702 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29703 2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29705 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29706 2062  3 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29707 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29708 2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29709 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29710 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29711 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
      apache 29712 2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
    Server version: Apache/2.2.3
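    A hedged triage sequence; many children under one root-owned parent is just normal prefork Apache, so the real question is what the children are busy with:
      apachectl fullstatus   # needs mod_status; shows what each worker is serving
      tail -n 2000 /var/log/httpd/access_log | awk '{print $1}' | sort | uniq -c | sort -rn | head   # top client IPs
      uptime; vmstat 1 5     # is the load CPU-bound, or processes stuck in I/O wait?
    A jump from 20 to 200+ in a week is more often a traffic change (a crawler, abuse, or a new slow endpoint) than a configuration one.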

    Read the article

  • Localization with ASP.NET MVC ModelMetadata

    - by kazimanzurrashid
    When using DisplayFor/EditorFor, there has been built-in support in ASP.NET MVC for showing localized validation messages, but no support for showing the associated label in localized text, unless you are using .NET 4.0 with MVC Futures. Let's say you are creating a create form for Product where you support both English and German, like the following. (Screenshots: the form rendered in English and in German.) I have recently added a few helpers for localization to MvcExtensions; let's see how we can use them to localize the form. As I have mentioned in the past, I am not a big fan of decorating classes with attributes, which is the recommended way in ASP.NET MVC. Instead, we will use the fluent configuration of MvcExtensions (similar to FluentNHibernate or EF Code First) to configure our view models. For example, for the above we will use:
      public class ProductEditModelConfiguration : ModelMetadataConfiguration<ProductEditModel>
      {
          public ProductEditModelConfiguration()
          {
              Configure(model => model.Id).Hide();

              Configure(model => model.Name).DisplayName(() => LocalizedTexts.Name)
                  .Required(() => LocalizedTexts.NameCannotBeBlank)
                  .MaximumLength(64, () => LocalizedTexts.NameCannotBeMoreThanSixtyFourCharacters);

              Configure(model => model.Category).DisplayName(() => LocalizedTexts.Category)
                  .Required(() => LocalizedTexts.CategoryMustBeSelected)
                  .AsDropDownList("categories", () => LocalizedTexts.SelectCategory);

              Configure(model => model.Supplier).DisplayName(() => LocalizedTexts.Supplier)
                  .Required(() => LocalizedTexts.SupplierMustBeSelected)
                  .AsListBox("suppliers");

              Configure(model => model.Price).DisplayName(() => LocalizedTexts.Price)
                  .FormatAsCurrency()
                  .Required(() => LocalizedTexts.PriceCannotBeBlank)
                  .Range(10.00m, 1000.00m, () => LocalizedTexts.PriceMustBeBetweenTenToThousand);
          }
      }
    As you can see, we are using Func<string> to set the localized text; this is just an overload of the regular string-based method. There are a few more methods in the ModelMetadata that accept this Func<string> where localization can be applied, like Description, Watermark, ShortDisplayName, etc. The LocalizedTexts is just a regular resource; we have both English and German. (Screenshots: the LocalizedTexts resource files.) Now let's see the view markup:
      <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Demo.Web.ProductEditModel>" %>

      <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
          <%= LocalizedTexts.Create %>
      </asp:Content>

      <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
          <h2><%= LocalizedTexts.Create %></h2>
          <%= Html.ValidationSummary(false, LocalizedTexts.CreateValidationSummary) %>
          <% Html.EnableClientValidation(); %>
          <% using (Html.BeginForm()) { %>
              <fieldset>
                  <%= Html.EditorForModel() %>
                  <p>
                      <input type="submit" value="<%= LocalizedTexts.Create %>" />
                  </p>
              </fieldset>
          <% } %>
          <div>
              <%= Html.ActionLink(LocalizedTexts.BackToList, "Index") %>
          </div>
      </asp:Content>
    As you can see, we are using the same LocalizedTexts for the other parts of the view that are not covered by the ModelMetadata, like the page title and button text. We are also using EditorForModel instead of EditorFor for individual fields; both are supported. One of the added benefits of the fluent-syntax-based configuration is that we get full compile-time checking for our resources, as we are not depending on string-based resource names as in ASP.NET MVC. You will find the complete localized CRUD example in the MvcExtensions sample folder. That's it for today.

    Read the article

  • Extracting the Date from a DateTime in Entity Framework 4 and LINQ

    - by Ken Cox [MVP]
    In my current ASP.NET 4 project, I'm displaying dates in a GridDateTimeColumn of Telerik's ASP.NET RadGrid control. I don't care about the time stuff, so my DataFormatString shows only the date bits:
      <telerik:GridDateTimeColumn FilterControlWidth="100px"
          DataField="DateCreated" HeaderText="Created"
          SortExpression="DateCreated" ReadOnly="True"
          UniqueName="DateCreated" PickerType="DatePicker"
          DataFormatString="{0:dd MMM yy}">
    My problem was that I couldn't get the built-in column filtering (it uses Telerik's DatePicker control) to behave. The DatePicker assumes that the time is 00:00:00, but the data would have times like 09:22:21. So, when you select a date and apply the EqualTo filter, you get no results. You would get results if all the time portions were 00:00:00. In essence, I wanted my Entity Framework query to give the DatePicker what it wanted... a date without the time portion. Fortunately, EF4 provides the TruncateTime function. After you include
      Imports System.Data.Objects.EntityFunctions
    you'll find that your EF queries accept the TruncateTime function. Here's my routine:
      Protected Sub RadGrid1_NeedDataSource _
          (ByVal source As Object, _
           ByVal e As Telerik.Web.UI.GridNeedDataSourceEventArgs) _
          Handles RadGrid1.NeedDataSource

          Dim ent As New OfficeBookDBEntities1
          Dim TopBOMs = From t In ent.TopBom, i In ent.Items _
                        Where t.BusActivityID = busActivityID _
                        And i.BusActivityID And t.ItemID = i.RecordID _
                        Order By t.DateUpdated Descending _
                        Select New With {.TopBomID = t.TopBomID, .ItemID = t.ItemID, _
                                         .PartNumber = i.PartNumber, _
                                         .Description = i.Description, .Notes = t.Notes, _
                                         .DateCreated = TruncateTime(t.DateCreated), _
                                         .DateUpdated = TruncateTime(t.DateUpdated)}
          RadGrid1.DataSource = TopBOMs
      End Sub
    Now when I select March 14, 2011 in the DatePicker, the filter doesn't stumble on time values that don't make sense. Full disclosure: Telerik gives me (and other developer MVPs) free copies of their suite.

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
    Recently someone asked me for steps to configure multiple instances of a MySQL database on an operating platform. Because of my familiarity with Solaris OE, I prepared some notes on configuring multiple instances of MySQL on Solaris 11. Maybe they're useful for some. If you want to run the Solaris Operating System (or any other OS of your choice) as a virtualized instance on your desktop, consider using VirtualBox. To download the Solaris Operating System, click here. Once you have Solaris 11 up and running, with Internet connectivity to access the Image Packaging System (IPS), follow the steps below to install MySQL and configure multiple instances.
      1. Install MySQL in Solaris 11:
         $ sudo pkg install mysql-51
      2. Verify that mysql is installed:
         $ svcs -a | grep mysql
         Note: the service FMRI will look similar to this one: svc:/application/database/mysql:version_51
      3. Prepare the data file system for MySQL instance 1:
         zfs create rpool/mysql
         zfs create rpool/mysql/data
         zfs set mountpoint=/mysql/data rpool/mysql/data
      4. Prepare the data file system for MySQL instance 2:
         zfs create rpool/mysql/data2
         zfs set mountpoint=/mysql/data2 rpool/mysql/data2
      5. Change the mysql/data property of the MySQL service (SMF) to point to /mysql/data:
         $ svcprop mysql:version_51 | grep mysql/data
         $ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data
      6. Create a new instance of MySQL 5.1:
         (a) Copy the manifest of the default instance to a temporary directory:
             $ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml
         (b) Make the appropriate modifications to the XML file:
             $ sudo vi /var/tmp/mysql_51_2.xml
             - Change the "instance name" section to a new value, "version_51_2"
             - Change the value of the property named "data" to point to the ZFS file system "/mysql/data2"
      7. Import the manifest into the SMF repository:
         $ sudo svccfg import /var/tmp/mysql_51_2.xml
      8. Before starting the services, copy the file /etc/mysql/my.cnf to the data directories /mysql/data and /mysql/data2:
         $ sudo cp /etc/mysql/my.cnf /mysql/data/
         $ sudo cp /etc/mysql/my.cnf /mysql/data2/
      9. Modify the my.cnf in each of the data directories as required:
         $ sudo vi /mysql/data/my.cnf
           Under the [client] section:
             port=3306
             socket=/tmp/mysql.sock
             ...
           Under the [mysqld] section:
             port=3306
             socket=/tmp/mysql.sock
             datadir=/mysql/data
             ...
             server-id=1
         $ sudo vi /mysql/data2/my.cnf
           Under the [client] section:
             port=3307
             socket=/tmp/mysql2.sock
             ...
           Under the [mysqld] section:
             port=3307
             socket=/tmp/mysql2.sock
             datadir=/mysql/data2
             ...
             server-id=2
      10. Modify the MySQL startup script (managed by SMF) to point to the appropriate my.cnf for each instance:
          $ sudo vi /lib/svc/method/mysql_51
          Note: search for all occurrences of the mysqld_safe command and modify them to include the --defaults-file option. An example entry would look as follows:
            ${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE}
      11. Start the services:
          $ sudo svcadm enable mysql:version_51_2
          $ sudo svcadm enable mysql:version_51
      12. Verify that the two services are running:
          $ svcs mysql
      13. Verify the processes:
          $ ps -ef | grep mysqld
      14. Connect to each mysqld instance and verify:
          $ mysql --defaults-file=/mysql/data/my.cnf -u root -p
          $ mysql --defaults-file=/mysql/data2/my.cnf -u root -p
    Some references for Solaris 11 newbies:
      - Taking your first steps with Solaris 11
      - Introducing the basics of the Image Packaging System
      - Service Management Facility How-To Guide
    For a detailed list of official educational modules available on Solaris 11, please visit here. For MySQL courses from Oracle University, access this page.

    Read the article

  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. With Entity Framework 1 you didn't have lazy loading, so this problem didn't surface. Assume I have a Person entity and an Address entity, where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand, using either the Include or the Load method:
      var people = context.People.Include("Addresses");
    or
      people.Addresses.Load();
    Lazy loading kicks in the first time the Person.Addresses collection is accessed:
      var people = context.People.ToList();

      // only person data is currently in memory

      foreach (var person in people)
      {
          // EF determines that no Address data has been loaded and lazy loads
          int count = person.Addresses.Count();
      }
    Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested. It can make your life easier or it can make it a big pain. So what does this have to do with WCF? One word: serialization. When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary, depending on the binding you are using. Well, if I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then the Address is serialized, and its Person property is accessed, and then also serialized, and then the Addresses collection is accessed again. Now the second time through, lazy loading doesn't kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization, but I personally found it trying to use WCF. The fix is to simply turn off lazy loading. This can be done per call by using the context options:
      context.ContextOptions.LazyLoadingEnabled = false;
    Turning lazy loading off will now allow your classes to be serialized properly. Note, this is if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO. This basically creates a proxy object that is a full Entity Framework object sitting between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it. You have to turn off proxy creation to ensure that your classes will serialize properly:
      context.ContextOptions.ProxyCreationEnabled = false;
    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn either lazy loading or proxy creation on and off as needed.

    Read the article

  • Silverlight Cream for January 15, 2011 -- #1028

    - by Dave Campbell
    Note to #1024 swag winners: I'm sending emails to the vendors Sunday night; thanks for your patience (a few of you have not contacted me yet).
    In this issue: Ezequiel Jadib, Daniel Egan (-2-), Page Brooks, Jason Zander, Andrej Tozon, Marlon Grech, Jonathan van de Veen, Walt Ritscher, Jesse Liberty, Jeremy Likness, Sacha Barber, William E. Burrows, and WindowsPhoneGeek.
    Above the fold:
      - Silverlight: "Building a Radar Control in Silverlight - Part 1" (Page Brooks)
      - WP7: "Tutorial: Dynamic Tile Push Notification for Windows Phone 7" (Jason Zander)
      - Training: "WP7 Unleashed Session I - Hands on Labs" (Daniel Egan)
    From SilverlightCream.com:
      - Silverlight Rough Cut Editor SP1 Released: Ezequiel Jadib has an announcement about the Rough Cut Editor SP1 release, and he walks you through the content, installation, and a bit of the initial use.
      - WP7 Unleashed Session I - Hands on Labs: Daniel Egan posted part 1 of 3 of a new WP7 HOL... video online and material to download... get 'em while they're hot!
      - WP7 Saving to Media Library: Daniel Egan has another post up as well, on saving an image to the media library... not the update from Tim Heuer... all good info.
      - Building a Radar Control in Silverlight - Part 1: This freakin' cool post from Page Brooks is the first of a series on building a 'Radar Control' in Silverlight... seriously, go to the bottom and run the demo... I pretty much guarantee you'll take the next link, which is to download the code... don't forget to read the article too!
      - Tutorial: Dynamic Tile Push Notification for Windows Phone 7: Jason Zander has a nice-looking tutorial up on dynamic tile notifications... good diagrams and discussion and plenty of code.
      - Reactive.buffering.from event: Andrej Tozon is continuing his Reactive Extensions posts with this one on buffering: BufferWithTime and BufferWithCount... good stuff, a good write-up, and the start of a WP7 game?
      - MEFedMVVM with PRISM 4: Marlon Grech combines his MEFedMVVM with Prism 4, and says it was easy... check out the post and the code.
      - Adventures while building a Silverlight Enterprise application, part #40: Jonathan van de Veen discusses things you need to pay attention to as your project gets close to first deployment... lots of good information, Silverlight or not.
      - Customize Windows 7 Preview pane for XAML files: Walt Ritscher has a (very easy) XAML extension for Windows 7 that allows previewing of XAML files in an Explorer window... as our UK friends say, "Brilliant!"
      - Entity Framework Code-First, oData & Windows Phone Client: From the never-ending stream of posts that is Jesse Liberty comes this one on EF Code-First... Jesse describes Code-First and OData all wrapped up in a WP7 app.
      - Sterling Silverlight and Windows Phone 7 Database Triggers and Auto-Identity: Sterling and database triggers, sitting in a tree... woot for WP7 from Jeremy Likness... provides database solutions including validation, data-specific concerns such as 'last modified', and post-save processing... all good, Jeremy!
      - A Look At Fluent APIs: Sacha Barber has a great post up that isn't necessarily Silverlight, but is it? ... we've been hearing a lot about fluent APIs... read on to see what the buzz is.
      - Windows Phone 7 - Part 3 - Final Application: William E. Burrows has part 3 of his WP7 tutorial series up... this one completes the Golf Handicap app by giving the user the ability to manage scores.
      - User Control vs Custom Control in Silverlight for WP7: WindowsPhoneGeek has a great diagram- and description-filled post up on user controls and custom controls in WP7... good external links too.
    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • What are some good questions (and good/bad answers) to ask at an interview to gauge the competency of the company/team?

    - by Wayne M
    I'm already familiar with the Joel Test, but it's been my experience that some of the questions there have the answers "massaged" to make the company seem better than it is. I've had several jobs in the past that, for instance, claimed they had a QA process and did unit testing, and what they really meant is "The programmers test the app, and test with the debugger and via trial-and-error."; they said they used SVN but they just lumped everything into one giant repository and had no concept of branching/merging or anything more complicated than updating and committing; said they can build in one step and what they really mean is it's "one step" to copy dozens of files by hand from the programmer's PC to the live server. How do you go about properly gauging a company's environment to make sure that it's a well-evolved company and not stuck on doing things a certain way because they've done it for years and they're ignorant of change? You can almost never ask to see their source code, so you're stuck trying to figure out if the interviewer's answer is accurate or BS to make the company seem good. Besides the Joel Test, what are some other good questions to get the proper feel for a company, and more importantly, what are some good and bad answers that could indicate a good or bad company? I mean something like (take at face value, please; it's all I could think of at short notice):
      Question: How does the software team apply the SOLID principles and Inversion of Control to their code?
      Good answer: We adhere to SOLID wherever possible; we use TDD so it kind of forces us to write abstract, testable code. We use Ninject for our IoC container because it's fairly easy to configure - it was that or StructureMap, but I find Ninject a bit more intuitive, and who doesn't like ninjas? You're not a pirate, are you?
      Bad answer: Our code is pretty secure, yeah. And what's this Inversion of Control thing? I've never heard of it before.
    You see what I did there. The "good" answer uses facts to back it up and has a bit of "in crowd" humor; the bad answer shows complete ignorance of the question - not necessarily a bad thing if you are interviewing for a manager/director position, but a terrible answer and a huge red flag if you're interviewing as a developer and talking to a senior developer or manager! My biggest problem at the moment is being able to take a generic response and gauge whether it's the good or bad answer; more often than not it's the bad kind, and I find myself frustrated almost from day one at the new job. I suppose I could name-drop if I ask about specific things (e.g. "Do you write unit tests?" and if the answer is yes, ask if they use NUnit, MbUnit or something else; if they mention data access, ask if they use a clean ORM like NHibernate or something more coupled like EF or Linq), but is there another way, short of being resolute enough to actually call the interview on things (which will almost certainly result in not getting the job, but if they are skirting the question it's probably not a job I want)?

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.
    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
      Pros:
        - Complete isolation between tenants. All the data for a tenant lives in a database only he can access.
      Cons:
        - It's not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB. You might be paying for storage that you don't really use.
        - A different connection pool is required per database.
        - Updates must be replicated across all the databases.
        - You need multiple backup strategies across all the databases.
    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user.
      Pros:
        - You only pay for a single database.
        - Data is isolated at the schema level. If the credentials for one tenant are compromised, the rest of the data for the other tenants is not.
      Cons:
        - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
        - Updates must be replicated across all the schemas.
        - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
        - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.
    Centralizing database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
      Pros:
        - You only pay for a single database.
        - You only have one set of objects to maintain and back up.
      Cons:
        - There is no real isolation. All the data for the different tenants is shared in the same tables.
        - You cannot use a traditional ORM like EF Code First for consuming the data.
        - A different user is required per tenant, which is stored at the server level. You have to back up that user independently.
    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.
      Pros:
        - You only have a single database with multiple federations.
        - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.
      Cons:
        - There is no real isolation at the database level. The isolation is enforced programmatically with federations.

    Read the article

  • RIA Services EntitySet does not support 'Edit' operation

    - by Savvas Sopiadis
    Hello everybody! Making my first steps in RIA Services (VS2010 Beta 2), I encountered this problem: I created an EF model (no POCOs), a generic repository on top of it, and a RIA service (hosted in an ASP.NET MVC application), and tried to get data from within the ASP.NET MVC application: worked well. Next step: Silverlight client. I got a reference to the RIA service (through its context), queried for all the records of the repository, and got them into the SL application as well, using this code sample:
      private ObservableCollection<Culture> _cultures = new ObservableCollection<Culture>();

      public ObservableCollection<Culture> cultures
      {
          get { return _cultures; }
          set
          {
              _cultures = value;
              RaisePropertyChanged("cultures");
          }
      }
      ....
      // Get cultures
      EntityQuery<Culture> queryCultures = from cu in dsCtxt.GetAllCulturesQuery()
                                           select cu;
      loCultures = dsCtxt.Load(queryCultures);
      loCultures.Completed += new EventHandler(lo_Completed);
      ....
      void loAnyCulture_Completed(object sender, EventArgs e)
      {
          ObservableCollection<Culture> temp = new ObservableCollection<Culture>(loAnyCulture.Entities);
          AnyCulture = temp[0];
      }
    The problem is this: whenever I try to edit some data in a record (in this example, the first record) I get this error: This EntitySet of type 'Culture' does not support the 'Edit' operation. I thought I had done something weird, so I tried to create an object of type Culture and assign a value to it: it worked well! What am I missing? Do I have to declare an EntitySet? Do I have to mark it? Do I have to... what? Thanks in advance

    Read the article

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns check-in policy from the TFS 2010 Power Tools and I'm just not understanding something - I simply cannot get anything to change as I try to use it! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex-based, so let's clear that doubt. I have tried to block the following scenarios:
      1) I have modified all of my T4 EF templates to generate files named EntityName.gen.cs. I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z, and nada!
      2) I don't want app.config and web.config files to be checked in by default, because we have these things stored in app.config.base and web.config.base files that our build scripts use to generate our per-environment app.config and web.config files. As such, I tried the following regexes, and again nothing worked: web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z.
    What is it that I am screwing up with this?

    Read the article

  • Entity Framework with MySQL - Timeout Expired while Generating Model

    - by Nathan Taylor
    I've constructed a database in MySQL and I am attempting to map it out with Entity Framework, but I start running into "GenerateSSDLException"s whenever I try to add more than about 20 tables to the EF context:
      An exception of type 'Microsoft.Data.Entity.Design.VisualStudio.ModelWizard.Engine.ModelBuilderEngine+GenerateSSDLException'
      occurred while attempting to update from the database. The exception message is: 'An error occurred while executing the
      command definition. See the inner exception for details.'
      Fatal error encountered during command execution. Timeout expired. The timeout period elapsed prior to completion of the
      operation or the server is not responding.
    There's nothing special about the affected tables, and it's never the same table(s); it's just that after a certain (unspecific) number of tables have been added, the context can no longer be updated without the "Timeout expired" error. Sometimes there's only one table left over, and sometimes three; the results are pretty unpredictable. Furthermore, the variance in the number of tables which can be added before the error suggests to me that the problem lies in the size of the query being generated to update the context, which includes both the existing table definitions and the new tables being added. Essentially, the SQL query is getting too large and it's failing to execute for some reason. If I generate the model with EdmGen2 it works without any errors, but the generated EDMX file cannot be updated within Visual Studio without producing the aforementioned exception. In all likelihood the source of this problem lies in the tool within Visual Studio, given that EdmGen2 works fine, but I'm hoping that others could offer some advice on how to approach this very unusual issue, because it seems I'm not the only person experiencing it. One suggestion a colleague offered was maintaining two separate EDMX files with some table crossover, but that seems like a pretty ugly fix in my opinion. I suppose this is what I get for trying to use "new technology". :(

    Read the article

  • Entity Framework + AutoMapper (Entity to DTO and DTO to Entity)

    - by vbobruisk
    Hello. I've got some problems using EF with AutoMapper. =/ For example: I've got 2 related entities (Customers and Orders) and their DTO classes:
      class CustomerDTO
      {
          public string CustomerID { get; set; }
          public string CustomerName { get; set; }
          public IList<OrderDTO> Orders { get; set; }
      }

      class OrderDTO
      {
          public string OrderID { get; set; }
          public string OrderDetails { get; set; }
          public CustomerDTO Customers { get; set; }
      }

      // when mapping Entity to DTO the code works
      Customers cust = getCustomer(id);
      Mapper.CreateMap<Customers, CustomerDTO>();
      Mapper.CreateMap<Orders, OrderDTO>();
      CustomerDTO custDTO = Mapper.Map<Customers, CustomerDTO>(cust);

      // but when I try to map back from DTO to Entity it fails with AutoMapperMappingException
      Mapper.Reset();
      Mapper.CreateMap<CustomerDTO, Customers>();
      Mapper.CreateMap<OrderDTO, Orders>();
      Customers customerModel = Mapper.Map<CustomerDTO, Customers>(custDTO); // exception is thrown here
    Am I doing something wrong? Thanks in advance!

    Read the article

  • Tracking changes in Entity Framework 4.0 using POCO Dynamic Proxies across multiple data contexts.

    - by Rob Packwood
    I started messing with EF 4.0 because I am curious about the POCO possibilities... I wanted to simulate a disconnected web environment and wrote the following code to simulate this:
      1. Save a test object in the database.
      2. Retrieve the test object.
      3. Dispose of the DataContext associated with the test object I used to retrieve it.
      4. Update the test object.
      5. Create a new data context and persist the changes on the test object that are automatically tracked within the DynamicProxy generated against my POCO object.
    The problem is that when I call dataContext.SaveChanges in the Test method below, the updates are not applied. The testStore entity shows a status of "Modified" when I check its EntityStateTracker, but it is no longer modified when I view it within the new dataContext's Stores property. I would have thought that calling the Attach method on the new dataContext would also bring the object's "Modified" state over, but that appears to not be the case. Is there something I am missing? I am definitely working with self-tracking POCOs using DynamicProxies.
      private static void SaveTestStore(string storeName = "TestStore")
      {
          using (var context = new DataContext())
          {
              Store newStore = context.Stores.CreateObject();
              newStore.Name = storeName;
              context.Stores.AddObject(newStore);
              context.SaveChanges();
          }
      }

      private static Store GetStore(string storeName = "TestStore")
      {
          using (var context = new DataContext())
          {
              return (from store in context.Stores
                      where store.Name == storeName
                      select store).SingleOrDefault();
          }
      }

      [Test]
      public void Test_Store_Update_Using_Different_DataContext()
      {
          SaveTestStore();
          Store testStore = GetStore();
          testStore.Name = "Updated";

          using (var dataContext = new DataContext())
          {
              dataContext.Stores.Attach(testStore);
              dataContext.SaveChanges(SaveOptions.DetectChangesBeforeSave);
          }

          Store updatedStore = GetStore("Updated");
          Assert.IsNotNull(updatedStore);
      }

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >