Search Results

Search found 10863 results on 435 pages for 'no refunds no returns'.


  • Should a method that waits for a change of state be const?

    - by Space_C0wb0y
    In a multithreaded scenario, I have a method like this: bool WaitForChange( time_duration WaitTime ) const; This method waits either until the state of the object has changed and returns true, or until the wait times out and returns false. My intuition is that const is there to protect against unwanted side effects of the method itself, so this is fine. But then again, some user might think that the state of the object could not have changed, since the method is declared const. Is that user stupid, or should I make the method non-const to avoid confusion?

    Read the article

  • Why does this Bash regex match return an Exit Status of "2"?

    - by PreservedMoose
    I'm writing a Bash script that needs to scan for the existence of non-ASCII characters in filenames. I'm using the POSIX bracket regex syntax to match the non-ASCII characters, but for some reason, when I test for the match in an if/then statement, the test always returns an Exit Status of 2, and never matches my test string. Here's the code in question: FILEREQ_SOURCEFILE="Filename–WithNonAScII-Charàcters-05sec_23.98.mov" REGEX_MATCH_NONASCII="[^[:ascii:]]" if [[ $FILEREQ_SOURCEFILE =~ $REGEX_MATCH_NONASCII ]]; then echo "Exit Status: $?" echo "Matched!" else echo "Exit Status: $?" echo "No Match" fi This code always returns: Exit Status: 2 No Match I've read and re-read the bash-hackers.org explanation of how regex matching works, as well as this previous question on SO regarding matching non-ASCII characters, but for the life of me, I can't get this to work. What am I missing here?

    Read the article

  • read after lseek always returns 0

    - by Wins
    I'm having an issue reading a byte from a file after performing lseek: long cur_position = lseek(fd, length * -1, SEEK_CUR ); // Rewind file reading by length bytes. printf("cur_position: %i\n", cur_position); int num_byte = read(fd, &byte, 1); printf("num_byte = %i\n", num_byte); With the initial variable length set to 34, the above code produces cur_position 5 (so there are definitely at least 34 bytes available after lseek returns), but the value num_byte returned from read is always 0 even though there are still more bytes to read. Does anyone know why num_byte is always 0, or am I making a mistake in the above code? Just for information, the above code was run on the following machine: $ uname -srvpio Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 GNU/Linux

    Read the article

  • How does getAltitude() of Android GPS Location work?

    - by Sebi
    Hi, I tried to implement a simple GPS tracker, so I used: lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE); lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 0, this); Then I used the public void onLocationChanged(Location location) method to read the altitude of my current location. But I don't really know what Location.getAltitude() returns. The documentation says it returns the altitude, but is this in meters or feet? If I put the phone on the desk next to me, the value fluctuates between 500 and -500. How does this really work?

    Read the article

  • Where clause in a LINQ query

    - by masoud
    In the code below, "transmittalno.TransID" always has a value, but "doctranstocon.Transid" sometimes does and sometimes does not. When "doctranstocon.Transid" has a value I have no problem, but when it has no value, the comparison in the where clause ("transmittalno.TransID == doctranstocon.Transid") throws the error "Object reference not set to an instance of an object." TranstoCons = from doctranstocon in _DocTranstoCons where ( transmittalno.TransID == doctranstocon.Transid ) select doctranstocon.tblTranstoCon I would like TranstoCons to return null when there is no value for "doctranstocon.Transid". How can I handle this error in the LINQ query below? var query = from transmittalno in _Transmittals Select new TransmittaltoConPresentationModel { TransID = transmittalno.TransID, Transmittal = transmittalno.TRANSMITTAL, TranstoCons = from doctranstocon in _DocTranstoCons where ( transmittalno.TransID == doctranstocon.Transid ) select doctranstocon.tblTranstoCon };
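    For reference, a minimal, self-contained sketch of the usual guard: filter out entries whose key is missing before comparing, so the equality check never touches a missing value. The types and data below are hypothetical stand-ins for the question's _Transmittals and _DocTranstoCons collections, and the sketch assumes plain in-memory objects with a nullable key; a LINQ to SQL or Entity Framework source may behave differently.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical stand-ins for the question's entities; names are illustrative only.
        class DocTranstoCon { public int? Transid; public string TblTranstoCon; }
        class Transmittal   { public int TransID; public string Name; }

        class Demo
        {
            static void Main()
            {
                var transmittals = new List<Transmittal> { new Transmittal { TransID = 1, Name = "T-1" } };
                var docTranstoCons = new List<DocTranstoCon>
                {
                    new DocTranstoCon { Transid = 1,    TblTranstoCon = "A" },
                    new DocTranstoCon { Transid = null, TblTranstoCon = "B" }   // record with no Transid
                };

                var query = from t in transmittals
                            select new
                            {
                                t.TransID,
                                // Check HasValue first so the comparison never touches a missing key.
                                TranstoCons = from d in docTranstoCons
                                              where d.Transid.HasValue && d.Transid.Value == t.TransID
                                              select d.TblTranstoCon
                            };

                foreach (var row in query)
                    Console.WriteLine("{0}: {1}", row.TransID, string.Join(", ", row.TranstoCons));
            }
        }

    If the source is a database query rather than in-memory objects, the same idea applies: add a null check (for example, doctranstocon.Transid != null) before the equality comparison in the where clause.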

    Read the article

  • What will or won't cause a thread to block (a question from a test)

    - by fingerprint211b
    I had a test, and there was a question I lost some points on because I wasn't able to answer it: Which of the following is NOT a condition which can cause a thread to block: Calling an object's wait() method Waiting for an I/O operation Calling sleep() Calling yield() Calling join() As far as I know, all of these are blocking calls: wait() blocks until something calls notify(); if the thread is WAITING for an I/O operation then it's obviously blocked; sleep() obviously blocks until the time runs out, or something wakes up the thread; yield() "cancels the rest of the thread's timeslice" (lacking a better term) and returns only when the thread is active again; join() blocks until the thread it's waiting for terminates. Am I missing something here?

    Read the article

  • Why does `Array.length`, `Function.length`, `String.length`, etc. return 1?

    - by zealoushacker
    While teaching my JavaScript class yesterday, my students and I came across some interesting functionality that I thought might be worth capturing in a question and the answer I've come to. Typing Array.length in the JS console in Chrome returns 1. Likewise, Function.length returns 1. This is important because every function in JavaScript is actually a Function object (MDN JS Ref: Function). Thus, Object.length and likely all other native objects will and should return 1 as the value of the length property. So, finally, why does this behavior occur?

    Read the article

  • Selecting only the entries that have a distinct combination of values?

    - by Theodore E O'Neal
    I have a table, links1, with the column headers CardID and AbilityID, that looks like this:
      CardID | AbilityID
      1001   | 1
      1001   | 2
      1001   | 3
      1002   | 2
      1002   | 3
      1002   | 4
      1003   | 3
      1003   | 4
      1003   | 5
    What I want is to be able to return all the CardID values that have two specific AbilityID values. For example: if I choose 1 and 2, it returns 1001; if I choose 3 and 4, it returns 1002 and 1003. Is it possible to do this with only one table, or will I need to create an identical table and do an INNER JOIN on those?
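    For reference, one common relational approach, sketched below in C# (assuming SQL Server and the table and column names from the question; the connection string is a placeholder): restrict the rows to the two AbilityID values, group by CardID, and keep only the groups that matched both.

        using System;
        using System.Data.SqlClient;

        class FindCardsWithBothAbilities
        {
            static void Main()
            {
                // Keeps only CardIDs that have BOTH ability IDs - no second table or self-join needed.
                const string sql = @"SELECT CardID
                                     FROM   links1
                                     WHERE  AbilityID IN (@a1, @a2)
                                     GROUP  BY CardID
                                     HAVING COUNT(DISTINCT AbilityID) = 2;";

                using (var conn = new SqlConnection("Server=.;Database=Cards;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@a1", 1);
                    cmd.Parameters.AddWithValue("@a2", 2);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader[0]);   // e.g. 1001 for abilities 1 and 2
                }
            }
        }

    A self-join of links1 to itself on CardID would also work, but the GROUP BY/HAVING form extends more easily to three or more abilities.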

    Read the article

  • Setting up RADIUS + LDAP for WPA2 on Ubuntu

    - by Morten Siebuhr
    I'm setting up a wireless network for ~150 users. In short, I'm looking for a guide to set RADIUS server to authenticate WPA2 against a LDAP. On Ubuntu. I got a working LDAP, but as it is not in production use, it can very easily be adapted to whatever changes this project may require. I've been looking at FreeRADIUS, but any RADIUS server will do. We got a separate physical network just for WiFi, so not too many worries about security on that front. Our AP's are HP's low end enterprise stuff - they seem to support whatever you can think of. All Ubuntu Server, baby! And the bad news: I now somebody less knowledgeable than me will eventually take over administration, so the setup has to be as "trivial" as possible. So far, our setup is based only on software from the Ubuntu repositories, with exception of our LDAP administration web application and a few small special scripts. So no "fetch package X, untar, ./configure"-things if avoidable. UPDATE 2009-08-18: While I found several useful resources, there is one serious obstacle: Ignoring EAP-Type/tls because we do not have OpenSSL support. Ignoring EAP-Type/ttls because we do not have OpenSSL support. Ignoring EAP-Type/peap because we do not have OpenSSL support. Basically the Ubuntu version of FreeRADIUS does not support SSL (bug 183840), which makes all the secure EAP-types useless. Bummer. But some useful documentation for anybody interested: http://vuksan.com/linux/dot1x/802-1x-LDAP.html http://tldp.org/HOWTO/html_single/8021X-HOWTO/#confradius UPDATE 2009-08-19: I ended up compiling my own FreeRADIUS package yesterday evening - there's a really good recipe at http://www.linuxinsight.com/building-debian-freeradius-package-with-eap-tls-ttls-peap-support.html (See the comments to the post for updated instructions). I got a certificate from http://CACert.org (you should probably get a "real" cert if possible) Then I followed the instructions at http://vuksan.com/linux/dot1x/802-1x-LDAP.html. This links to http://tldp.org/HOWTO/html_single/8021X-HOWTO/, which is a very worthwhile read if you want to know how WiFi security works. UPDATE 2009-08-27: After following the above guide, I've managed to get FreeRADIUS to talk to LDAP: I've created a test user in LDAP, with the password mr2Yx36M - this gives an LDAP entry roughly of: uid: testuser sambaLMPassword: CF3D6F8A92967E0FE72C57EF50F76A05 sambaNTPassword: DA44187ECA97B7C14A22F29F52BEBD90 userPassword: {SSHA}Z0SwaKO5tuGxgxtceRDjiDGFy6bRL6ja When using radtest, I can connect fine: > radtest testuser "mr2Yx36N" sbhr.dk 0 radius-private-password Sending Access-Request of id 215 to 130.225.235.6 port 1812 User-Name = "msiebuhr" User-Password = "mr2Yx36N" NAS-IP-Address = 127.0.1.1 NAS-Port = 0 rad_recv: Access-Accept packet from host 130.225.235.6 port 1812, id=215, length=20 > But when I try through the AP, it doesn't fly - while it does confirm that it figures out the NT and LM passwords: ... rlm_ldap: sambaNTPassword -> NT-Password == 0x4441343431383745434139374237433134413232463239463532424542443930 rlm_ldap: sambaLMPassword -> LM-Password == 0x4346334436463841393239363745304645373243353745463530463736413035 [ldap] looking for reply items in directory... WARNING: No "known good" password was found in LDAP. Are you sure that the user is configured correctly? 
[ldap] user testuser authorized to use remote access rlm_ldap: ldap_release_conn: Release Id: 0 ++[ldap] returns ok ++[expiration] returns noop ++[logintime] returns noop [pap] Normalizing NT-Password from hex encoding [pap] Normalizing LM-Password from hex encoding ... It is clear that the NT and LM passwords differ from the above, yet the message "[ldap] user testuser authorized to use remote access" appears - and the user is later rejected...

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevant config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple of seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. The sixth request once again returns data, except now it takes 500ms again. The process repeats for several cycles, after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away but then every request is 500ms. What I suspect is happening is the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left and as they error out, maybe they are restarted and become available on the next round-robin loop but the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system... none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm-process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the php script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag. A while/stream_select() loop to detect socket data and fread($sock, 4096) to grab the data. I don't call fclose() obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1

    Read the article

  • Connection Timed Out - Simple outbound Postfix for PHP Contact form

    - by BLaZuRE
    Alright, so I only got Postfix for a PHP contact form that will send email to a single address. I only want it to send out mail to a single external address ([email protected]). I have the domain sub1.sub2.domain.com. I installed Postfix out of the Ubuntu repo, with minimal config changes. I cannot get Postfix to send mail externally (though it succeeds for internal accounts, which is unnecessary). The email simply defers if I generate an email using PHP mail(). If I try to form my own in telnet, right after rcpt to: [email protected], I get a postfix/smtpd[31606]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: example.com; from=<root@localhost> to=<[email protected]> proto=ESMTP helo=<localhost> When I comment out the default_transport = error and relay_transport = error lines, I get the following: Jun 26 14:33:00 sub1 postfix/smtp[12191]: 2DA06F88206A: to=<[email protected]>, relay=none, delay=514, delays=409/0.01/105/0, dsn=4.4.1, status=deferred (connect to aspmx3.googlemail.com[74.125.127.27]:25: Connection timed out) Jun 26 14:36:36 sub1 postfix/smtp[12225]: connect to mta7.am0.yahoodns.net[98.139.175.224]:25: Connection timed out Jun 26 14:38:00 sub1 postfix/smtp[12225]: 22952F88208E: to=<[email protected]>, relay=none, delay=655, delays=550/0.01/105/0, dsn=4.4.1, status=deferred (connect to mta5.am0.yahoodns.net[67.195.168.230]:25: Connection timed out) My main.cf # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = sub1.sub2.domain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = sub1.sub2.domain.com, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all default_transport = error relay_transport = error Also, a dig sub1.sub2.domain.com MX returns: ; <<>> DiG 9.7.0-P1 <<>> sub1.sub2.domain.com MX ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4853 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;sub1.sub2.domain.com. IN MX ;; AUTHORITY SECTION: sub2.domain.com. 600 IN SOA sub2.domain.com. sub5.domain.com. 
2012062915 7200 600 1209600 600 ;; Query time: 0 msec ;; SERVER: x.x.x.x#53(x.x.x.x) ;; WHEN: Fri Jun 29 16:35:00 2012 ;; MSG SIZE rcvd: 84 lsof -i returns empty netstat -t -a | grep LISTEN returns tcp 0 0 localhost:mysql *:* LISTEN tcp 0 0 *:ftp *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp6 0 0 [::]:netbios-ssn [::]:* LISTEN tcp6 0 0 [::]:www [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN tcp6 0 0 localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:microsoft-ds [::]:* LISTEN

    Read the article

  • how to troubleshoot using rsyslog to output to a mysql database

    - by ChrisNZ
    Using FreeBSD 8.0 32 bit. I have installed rsyslogd 5.5.5 with ommysql. (installed ports /usr/ports/sysutils/rsyslog55 and /usr/ports/sysutils/rsyslog55-mysql) My rsyslog.conf file looks like: $ModLoad imudp $ModLoad imtcp $ModLoad ommysql $ModLoad immark.so $ModLoad imuxsock.so $ModLoad imklog.so $OptimizeForUniprocessor on $AllowedSender UDP, 10.0.0.0/8 $UDPServerAddress 0.0.0.0 $UDPServerRun 514 $UDPServerTimeRequery 2 # +SG560 *.* :ommysql:127.0.0.1,Syslog,sysloguser,mypassword My command line flags for rsyslogd are: -c5 -4 Checking the code with -c5 -N1 returns no errors. I have confirmed that rsyslogd is working by changing the last line to say: *.* /var/log/snapgear.log which results in messages appearing in the snapgear.log file. So it is probably something to do with my MySQL setup If I do: mysql -u sysloguser -p Syslog Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 56 Server version: 5.0.86 FreeBSD port: mysql-server-5.0.86 mysql> select * from SystemEvents; Empty set (0.00 sec) mysql> :-( I have confirmed that sysloguser has full privileges for the Syslog database. If I run rsyslogd on the console in debug mode: /usr/local/sbin/rsyslogd -f /usr/local/etc/rsyslog.conf -c5 -n -d I can see this sequence of events each time a message is received: 9244.376687256:28359280: main Q: entry added, size now log 1, phys 1 entries 9244.376705694:28359280: main Q: EnqueueMsg advised worker start 9244.376726647:28359280: Listening on UDP syslogd socket 4 (IPv4/port 514). 9244.376728602:28359280: --------imUDP calling select, active file descriptors (max 4): 4 9244.376890075:283593c0: wti 0x28306e80: worker awoke from idle processing 9244.376892031:283593c0: we deleted 0 objects and enqueued 0 objects 9244.376893986:283593c0: delete batch from store, new sizes: log 1, phys 1 9244.376895942:283593c0: msgConsumer processes msg 0/1 9244.376897898:283593c0: msg parser: flags 70, from '~NOTRESOLVED~', msg 'Jun 29 17:32:24 SG560 kernel: (20000629T1732244' 9244.376900132:283593c0: parse using parser list 0x283080e8 (the default list). 9244.376902088:283593c0: dropped LF at very end of message (DropTrailingLF is set) 9244.376904044:283593c0: Parser 'rsyslog.rfc5424' returned -2160 9244.376905999:283593c0: Message will now be parsed by the legacy syslog parser (one size fits all... ;)). 9244.376907955:283593c0: Parser 'rsyslog.rfc3164' returned 0 9244.376909910:283593c0: testing filter, f_pmask 255 9244.376911866:283593c0: Called action, logging to ommysql 9244.376918012:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244] 9244.376919967:283593c0: action call returned -2123 9244.376921923:283593c0: tryDoAction: unexpected error code -2123, finalizing 9244.376926113:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244] 9244.376928069:283593c0: ruleset: get iRet 0 from rule.ProcessMsg() 9244.376930024:283593c0: ruleset.ProcessMsg() returns 0 9244.376931980:283593c0: regular consumer finished, iret=0, szlog 0 sz phys 1 9244.376933936:283593c0: XXX: enqueueing data element 0 of 1 9244.376935891:283593c0: we deleted 1 objects and enqueued 0 objects 9244.376938126:283593c0: delete batch from store, new sizes: log 0, phys 0 9244.376940082:283593c0: regular consumer finished, iret=4, szlog 0 sz phys 0 9244.376942037:283593c0: main Q:Reg/w0: worker IDLE, waiting for work. .... 
I can see the action call to ommysql returns unexpected error code -2123. Now I am stuck! Any ideas on what to look for next? Perhaps there are extra ports I need to install? I will be very grateful for any assistance here!

    Read the article

  • How to obtain a random sub-datatable from another data table

    - by developerit
    Introduction In this article, I’ll show how to get a random subset of data from a DataTable. This is useful when you already have queries that are filtered correctly but return all the rows. Analysis I came across this situation when I wanted to display a random tag cloud. I already had the query to get the keywords ordered by number of clicks and I wanted to create a tag cloud. Tags that are the most popular should have a better chance of being picked and should be displayed larger than less popular ones. Implementation In this code snippet, there is everything you need. ' Min size, in pixel for the tag Private Const MIN_FONT_SIZE As Integer = 9 ' Max size, in pixel for the tag Private Const MAX_FONT_SIZE As Integer = 14 ' Basic function that retreives Tags from a DataBase Public Shared Function GetTags() As MediasTagsDataTable ' Simple call to the TableAdapter, to get the Tags ordered by number of clicks Dim dt As MediasTagsDataTable = taMediasTags.GetDataValide ' If the query returned no result, return an empty DataTable If dt Is Nothing OrElse dt.Rows.Count < 1 Then Return New MediasTagsDataTable End If ' Set the font-size of the group of data ' We are dividing our results into sub set, according to their number of clicks ' Example: 10 results -> [0,2] will get font size 9, [3,5] will get font size 10, [6,8] wil get 11, ... ' This is the number of elements in one group Dim groupLenth As Integer = CType(Math.Floor(dt.Rows.Count / (MAX_FONT_SIZE - MIN_FONT_SIZE)), Integer) ' Counter of elements in the same group Dim counter As Integer = 0 ' Counter of groups Dim groupCounter As Integer = 0 ' Loop througt the list For Each row As MediasTagsRow In dt ' Set the font-size in a custom column row.c_FontSize = MIN_FONT_SIZE + groupCounter ' Increment the counter counter += 1 ' If the group counter is less than the counter If groupLenth <= counter Then ' Start a new group counter = 0 groupCounter += 1 End If Next ' Return the new DataTable with font-size Return dt End Function ' Function that generate the random sub set Public Shared Function GetRandomSampleTags(ByVal KeyCount As Integer) As MediasTagsDataTable ' Get the data Dim dt As MediasTagsDataTable = GetTags() ' Create a new DataTable that will contains the random set Dim rep As MediasTagsDataTable = New MediasTagsDataTable ' Count the number of row in the new DataTable Dim count As Integer = 0 ' Random number generator Dim rand As New Random() While count < KeyCount Randomize() ' Pick a random row Dim r As Integer = rand.Next(0, dt.Rows.Count - 1) Dim tmpRow As MediasTagsRow = dt(r) ' Import it into the new DataTable rep.ImportRow(tmpRow) ' Remove it from the old one, to be sure not to pick it again dt.Rows.RemoveAt(r) ' Increment the counter count += 1 End While ' Return the new sub set Return rep End Function Pros: This method is good because it doesn’t require much work to get it working fast. It is a good concept when you are working with small tables, let's say fewer than 100 records. Cons: If you have more than 100 records, an out-of-memory exception may occur since we are copying and duplicating rows. I would consider using a stored procedure instead.
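    As the Cons above suggest, for larger tables it is usually cheaper to let the database pick the random rows. A minimal sketch of that idea, written in C# here and assuming SQL Server, where ORDER BY NEWID() shuffles the rows and TOP limits the sample; the table name and connection string are placeholders.

        using System.Data;
        using System.Data.SqlClient;

        class RandomTagSample
        {
            // Server-side sampling: nothing is copied row by row in application memory.
            static DataTable GetRandomSampleTags(int keyCount, string connectionString)
            {
                const string sql = "SELECT TOP (@KeyCount) * FROM MediasTags ORDER BY NEWID()";

                var result = new DataTable();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@KeyCount", keyCount);
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        adapter.Fill(result);   // Fill opens and closes the connection as needed
                    }
                }
                return result;
            }
        }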

    Read the article

  • va_getDriverName() failed with unknown

    - by MIkCode
    I upgraded to 14.04 and afterwards I got an issue with CPU usage when playing HD. I checked vainfo and this is what I got back: libva info: VA-API version 0.35.0 libva info: va_getDriverName() returns -1 libva error: va_getDriverName() failed with unknown libva error,driver_name=(null) vaInitialize failed with error code -1 (unknown libva error),exit I reinstalled the driver with sudo apt-get --reinstall install i965-va-driver but I still get the same error. Any thoughts?

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services), then, was the first Microsoft technology to support the Open Data Protocol in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other other applications in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, Select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under "Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents”, select “Generate Model From Database” and click “Next”   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have explicitly specify which Entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products Entity we created earlier. We use the EnititySetRights.AllRead since we want to give read only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our ODataFeed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive. ie. Products work but products doesn’t.   Filtering our data OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select, $inlinecount. Pre-filtering our data using Query Interceptors The Product feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors which allows us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We also confirm this by looking at the WHERE clause in the SQL generated by the entity framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of Query/Change interceptors can be seen here including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a Json response is to include the following in the  request header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.
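    As a quick aside, here is a hedged sketch of consuming the filtered feed from plain .NET code - just an HTTP GET against the $filter URI shown above. The host and port are placeholders, and the Accept header requests JSON as described in the footnote.

        using System;
        using System.IO;
        using System.Net;

        class ODataFilterDemo
        {
            static void Main()
            {
                // Placeholder host/port; spaces in the $filter expression are escaped as %20.
                var uri = "http://localhost:1234/DataService.svc/Products?$filter=CategoryID%20eq%202";

                var request = (HttpWebRequest)WebRequest.Create(uri);
                // Ask for a JSON response instead of the default Atom feed.
                request.Accept = "application/json, text/javascript, */*";

                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }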

    Read the article

  • Validating a linked item&rsquo;s data template in Sitecore

    - by Kyle Burns
    I’ve been doing quite a bit of work in Sitecore recently and last week I encountered a situation that it appears many others have hit.  I was working with a field that had been configured originally as a grouped droplink, but now needed to be updated to support additional levels of hierarchy in the folder structure.  If you’ve done any work in Sitecore that statement makes sense, but if not it may seem a bit cryptic.  Sitecore offers a number of different field types and a subset of these field types focuses on providing links either to other items on the content tree or to content that is not stored in Sitecore.  In the case of the grouped droplink, the field is configured with a “root” folder and each direct descendant of this folder is considered to be a header for a grouping of other items and displayed in a dropdown.  A picture is worth a thousand words, so consider the following piece of a content tree: If I configure a grouped droplink field to use the “Current” folder as its datasource, the control that gets to my content author looks like this: This presents a nicely organized display and limits the user to selecting only the direct grandchildren of the folder root.  It also presents the limitation that struck us as we were thinking through the content architecture and how it would hold up over time – the authors cannot further organize content under the root folder because of the structure required for the dropdown to work.  Over time, not allowing the hierarchy to go any deeper would prevent our authors from being able to organize their content in a way that it would be found when needed, so the grouped droplink data type was not going to fit the bill. I needed to look for an alternative data type that allowed for selection of a single item and limited my choices to descendants of a specific node on the content tree.  After looking at the options available for links in Sitecore and considering them against each other, one option stood out as nearly perfect – the droptree.  This field type stores its data identically to the droplink and allows for the selection of zero or one items under a specific node in the content tree.  By changing my data template to use droptree instead of grouped droplink, the author is now presented with the following when selecting a linked item: Sounds great, but I did say almost perfect – there’s still one flaw.  The code intended to display the linked item is expecting the selection to use a specific data template (or more precisely it makes certain assumptions about the fields that will be present), but the droptree does nothing to prevent the author from selecting a folder (since folders are items too) instead of one of the items contained within a folder.  I looked to see if anyone had already solved this problem.  I found many people discussing the problem, but the closest that I found to a solution was the statement “the best thing would probably be to create a custom validator” with no further discussion regarding what this validator might look like.  I needed to create my own validator to ensure that the user had not selected a folder.  Since so many people had the same issue, I decided to make the validator as reusable as possible and share it here. The validator that I created inherits from StandardValidator.  In order to make the validator more intuitive to developers that are familiar with the TreeList controls in Sitecore, I chose to implement the following parameters: ExcludeTemplatesForSelection – serves as a “deny list”.  
If the data template of the selected item is in this list it will not validate IncludeTemplatesForSelection – this can either be empty to indicate that any template not contained in the exclusion list is acceptable or it can contain the list of acceptable templates Now that I’ve explained the parameters and the purpose of the validator, I’ll let the code do the rest of the talking: 1: /// <summary> 2: /// Validates that a link field value meets template requirements 3: /// specified using the following parameters: 4: /// - ExcludeTemplatesForSelection: If present, the item being 5: /// based on an excluded template will cause validation to fail. 6: /// - IncludeTemplatesForSelection: If present, the item not being 7: /// based on an included template will cause validation to fail 8: /// 9: /// ExcludeTemplatesForSelection trumps IncludeTemplatesForSelection 10: /// if the same value appears in both lists. Lists are comma seperated 11: /// </summary> 12: [Serializable] 13: public class LinkItemTemplateValidator : StandardValidator 14: { 15: public LinkItemTemplateValidator() 16: { 17: } 18:   19: /// <summary> 20: /// Serialization constructor is required by the runtime 21: /// </summary> 22: /// <param name="info"></param> 23: /// <param name="context"></param> 24: public LinkItemTemplateValidator(SerializationInfo info, StreamingContext context) : base(info, context) { } 25:   26: /// <summary> 27: /// Returns whether the linked item meets the template 28: /// constraints specified in the parameters 29: /// </summary> 30: /// <returns> 31: /// The result of the evaluation. 32: /// </returns> 33: protected override ValidatorResult Evaluate() 34: { 35: if (string.IsNullOrWhiteSpace(ControlValidationValue)) 36: { 37: return ValidatorResult.Valid; // let "required" validation handle 38: } 39:   40: var excludeString = Parameters["ExcludeTemplatesForSelection"]; 41: var includeString = Parameters["IncludeTemplatesForSelection"]; 42: if (string.IsNullOrWhiteSpace(excludeString) && string.IsNullOrWhiteSpace(includeString)) 43: { 44: return ValidatorResult.Valid; // "allow anything" if no params 45: } 46:   47: Guid linkedItemGuid; 48: if (!Guid.TryParse(ControlValidationValue, out linkedItemGuid)) 49: { 50: return ValidatorResult.Valid; // probably put validator on wrong field 51: } 52:   53: var item = GetItem(); 54: var linkedItem = item.Database.GetItem(new ID(linkedItemGuid)); 55:   56: if (linkedItem == null) 57: { 58: return ValidatorResult.Valid; // this validator isn't for broken links 59: } 60:   61: var exclusionList = (excludeString ?? string.Empty).Split(','); 62: var inclusionList = (includeString ?? string.Empty).Split(','); 63:   64: if ((inclusionList.Length == 0 || inclusionList.Contains(linkedItem.TemplateName)) 65: && !exclusionList.Contains(linkedItem.TemplateName)) 66: { 67: return ValidatorResult.Valid; 68: } 69:   70: Text = GetText("The field \"{0}\" specifies an item which is based on template \"{1}\". This template is not valid for selection", GetFieldDisplayName(), linkedItem.TemplateName); 71:   72: return GetFailedResult(ValidatorResult.FatalError); 73: } 74:   75: protected override ValidatorResult GetMaxValidatorResult() 76: { 77: return ValidatorResult.FatalError; 78: } 79:   80: public override string Name 81: { 82: get { return @"LinkItemTemplateValidator"; } 83: } 84: }   In this blog entry, I have shared some code that I found useful in solving a problem that seemed fairly common.  
Hopefully the next person that is looking for this answer finds it useful as well.

    Read the article

  • Packing a DBF

    - by Tom Hines
    I thought my days of dealing with DBFs as a "production data" source were over, but HA (no such luck). I recently had to retrieve, modify and replace some data that needed to be delivered in a DBF file. Everything was fine until I realized / remembered the DBF driver does not ACTUALLY delete records from the data source -- it only marks them for deletion.  You are responsible for handling the "chaff" either by using a utility to remove deleted records or by simply ignoring them.  If imported into Excel, the marked-deleted records are ignored, but the file size will reflect the extra content.  After several rounds of testing CRUD, the output DBF was huge. So, I went hunting for a method to "Pack" the records (removing deleted ones and resizing the DBF file) and eventually ran across the FOXPRO driver at ( http://msdn.microsoft.com/en-us/vfoxpro/bb190233.aspx ).  Once installed, I changed the DSN in the code to the new one I created in the ODBC Administrator and ran some tests.  Using MSQuery, I simply tested the raw SQL command Pack {tablename} and it WORKED! One really neat thing is the PACK command is used like regular SQL instructions; "Pack {tablename}" is all that is needed. It is necessary, however, to close all connections to the database (and re-open) before issuing the PACK command or you will get the "File is in use" error.    Here is some C# code for a Pack method.         /// <summary>       /// Pack the DBF removing all deleted records       /// </summary>       /// <param name="strTableName">The table to pack</param>       /// <param name="strError">output of any errors</param>       /// <returns>bool (true if no errors)</returns>       public static bool Pack(string strTableName, ref string strError)       {          bool blnRetVal = true;          try          {             OdbcConnectionStringBuilder csbOdbc = new OdbcConnectionStringBuilder()             {                Dsn = "PSAP_FOX_DBF"             };             string strSQL = "pack " + strTableName;             using (OdbcConnection connOdbc = new OdbcConnection(csbOdbc.ToString()))             {                connOdbc.Open();                OdbcCommand cmdOdbc = new OdbcCommand(strSQL, connOdbc);                cmdOdbc.ExecuteNonQuery();                connOdbc.Close();             }          }          catch (Exception exc)          {             blnRetVal = false;             strError = exc.Message;          }          return blnRetVal;       }
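    Boiled down, the essential sequence is just an ODBC connection through the Visual FoxPro driver and a PACK statement. A minimal stand-alone sketch, assuming the same "PSAP_FOX_DBF" DSN used above; the table name is a placeholder, and every other connection to the DBF must be closed first.

        using System;
        using System.Data.Odbc;

        class PackDemo
        {
            static void Main()
            {
                using (var conn = new OdbcConnection("Dsn=PSAP_FOX_DBF"))
                {
                    conn.Open();
                    using (var cmd = new OdbcCommand("PACK MYTABLE", conn))
                    {
                        cmd.ExecuteNonQuery();   // removes deleted rows and shrinks the file
                    }
                }
                Console.WriteLine("PACK completed.");
            }
        }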

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from tables. To return the last inserted identity value we have the option to use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function depending on the requirement, but it will be a real mess if anybody uses any one of these functions without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are similar functions in terms of returning an identity value. They all return values that are inserted into an identity column. Earlier, in SQL Server 7, we used to use @@IDENTITY to return the last inserted identity value because in those days we didn't have functions like SCOPE_IDENTITY or IDENT_CURRENT, but now we have all three. So let’s check out which one is responsible for what. IDENT_CURRENT returns the last inserted identity value in a particular table. It never depends on a connection or the scope of the insert statement. The IDENT_CURRENT function takes a table name as a parameter. Here is the syntax to get the last inserted identity value in a particular table using the IDENT_CURRENT function: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last inserted identity value created in any table in the current session. But there is a small difference between the two: SCOPE_IDENTITY returns only the value inserted within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let’s have a look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) CREATE TABLE EmployeeLog ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) I have an insert trigger defined on the table Employee which inserts a new record in the EmployeeLog whenever a record is inserted into the Employee table. So suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger fires automatically and inserts a record into EmployeeLog. Here the scope of the insert statement and the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog because it’s not limited to a particular scope. Now if I want to get the Employee table’s identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a recently created record in a SQL statement or stored procedure. It’s safe and ensures bug-free code.
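    To see the difference in practice, here is a minimal ADO.NET sketch (the connection string is a placeholder; the Employee table is the one defined above): the INSERT and SELECT SCOPE_IDENTITY() run in the same batch, so the value that comes back is the identity generated by this insert, not the one produced by the trigger on EmployeeLog.

        using System;
        using System.Data.SqlClient;

        class ScopeIdentityDemo
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=Test;Integrated Security=true";
                const string sql = @"INSERT INTO Employee (EmpName, EmpSal) VALUES (@name, @sal);
                                     SELECT CAST(SCOPE_IDENTITY() AS numeric(18,0));";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@name", "Arefin");
                    cmd.Parameters.AddWithValue("@sal", 1.0);
                    conn.Open();
                    // ExecuteScalar returns the first column of the first row of the first result
                    // set, which here is the SCOPE_IDENTITY() value for the row just inserted.
                    var newEmpId = (decimal)cmd.ExecuteScalar();
                    Console.WriteLine("New EmpId: {0}", newEmpId);
                }
            }
        }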

    Read the article

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links Live Demo Source Code I finally got a live demo up and running!  I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services.  Thankfully, Tim Heuer had already encountered and explained all of the problems I had along with solutions to these and other common pitfalls.  You can find his blog post here.  The ones that got me were the default authentication tag being set to Windows instead of Forms, needed to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager.   To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site.  I basically added the feed URL as an app setting in the web.config:       <appSettings>         <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ -->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="~/Demo/1day-M2.5.xml" />-->         <add key="FeedUrl"              value="~/Demo/7day-M2.5.xml" />     </appSettings> You will need to do the same if you want to run from local copies of the feed data.  I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config:       private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"];       /// <summary>     /// Gets the feed at the specified URL.     /// </summary>     /// <param name="url">The URL.</param>     /// <returns>A <see cref="SyndicationFeed"/> object.</returns>     public static SyndicationFeed GetFeed(String url)     {         SyndicationFeed feed = null;           if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") )         {             // resolve virtual path to physical file system             url = System.Web.HttpContext.Current.Server.MapPath(url);         }           try         {             log.Debug("Loading RSS feed: " + url);               using ( var reader = XmlReader.Create(url) )             {                 feed = SyndicationFeed.Load(reader);             }         }         catch ( Exception ex )         {             log.Error("Error occurred while loading RSS feed: " + url, ex);         }           return feed;     } You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally and make sure the FeedUrl is pointing to a valid location.  Please let me know if you have any comments or if you run into any issues with the code.   Enjoy!

    Read the article

  • Developing your Data Access Layer with ADO.NET Entity Framework 4

    Entity Framework has evolved in the light of feedback. ADO.NET Entity Framework 4 is now better able to accommodate different development methodologies. A welcome improvement is that the application designer now has a range of options for how Entity Framework creates the data access layer. Prasanna returns to Simple-Talk to explain the significance of the changes.

    Read the article
