Search Results

Search found 16429 results on 658 pages for 'account names'.


  • How can we implement change notification propagation for WPF and SL in the MVVM pattern?

    - by Firoso
    Here's an example scenario targeting MVVM WPF/SL development: the view data-binds to a view model property, "Target". "Target" exposes a field of an object called "data" that exists in the local application model, called "Original". When "Original" changes, it should raise a notification to the view model and then propagate that change notification to the view.

    Here are the solutions I've come up with, but I don't like any of them all that much. I'm looking for other ideas; by the time we come up with something rock solid I'm certain Microsoft will have released .NET 5 with WPF/SL extensions and better tools for MVVM development. For now the question is: "What have you done to solve this problem and how has it worked out for you?"

    Option 1.
    Proposal: Attach a handler to data's PropertyChanged event that watches for the string names of properties it cares about that might have changed, and raises the appropriate notification.
    Why I don't like it: Changes don't bubble naturally, objects must be explicitly watched, and if data changes to a new source, events must be un-registered/registered.
    Why I kind of like it: I get explicit control over propagation of changes, and I don't have to use any types that belong at a higher level of the application, such as dependency properties.

    Option 2.
    Proposal: Attach a handler to data's PropertyChanged event that re-raises the event across all properties using the same property name.
    Why I don't like it: This is essentially the same as option 1, but less intelligent, and it forces me to never change my property names, as they have to be the same as the property names on data.
    Why I kind of like it: It's very easy to set up and I don't have to think about it... then again, if I try to think and change names to things that make sense, I shoot myself in the foot, and then I have to think about it!

    Option 3.
    Proposal: Inherit my view model from DependencyObject and notify binding sources of changes directly.
    Why I don't like it: I'm not even 100% sure dependency properties/objects can DO this; it was just a thought to look into. Also I don't personally feel that WPF/SL types like DependencyObject belong at the view model level.
    Why I kind of like it: IF it has the capability that I'm seeking then it's a good answer! Minus that pesky layering issue.

    Option 4.
    Proposal: Use a consistent agent messaging system based on the Task Parallel Library's Dataflow components to propagate everything through linked pipelining.
    Why I don't like it: I've never tried this, and somehow I think it will be lacking; plus it requires me to think about my code completely differently all the way around.
    Why I kind of like it: It has the possibility of allowing me to do some VERY fun manipulations with a push-based data model, using ActionBlocks as validation AND setters to then privately change view model properties and explicitly control PropertyChanged notifications.
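
    The core of Options 1 and 2 is a relay: subscribe to the inner object's change event, translate (or simply forward) the property name, and re-raise for the view. Below is a minimal, language-agnostic sketch of that relay in Python, used here only to illustrate the idea; the real WPF code would use INotifyPropertyChanged, and every class and method name in the sketch is hypothetical.

    ```python
    class Observable:
        """Minimal stand-in for an object that raises PropertyChanged."""
        def __init__(self):
            self._subscribers = []

        def subscribe(self, handler):
            self._subscribers.append(handler)

        def raise_changed(self, property_name):
            for handler in self._subscribers:
                handler(property_name)


    class RelayViewModel(Observable):
        """Watches the wrapped model and re-raises notifications for the view."""

        # Option 1: an explicit mapping from model property names to view-model names.
        NAME_MAP = {"Original": "Target"}

        def __init__(self, data):
            super().__init__()
            self._data = data
            data.subscribe(self._on_model_changed)

        def _on_model_changed(self, property_name):
            mapped = self.NAME_MAP.get(property_name)
            if mapped is not None:           # Option 2 would just forward the name unchanged
                self.raise_changed(mapped)


    model = Observable()
    vm = RelayViewModel(model)
    vm.subscribe(lambda name: print("view sees change:", name))
    model.raise_changed("Original")          # prints: view sees change: Target
    ```

    The mapping table is what separates Option 1 from Option 2: keep it and you pay the bookkeeping cost the question describes; drop it and your view-model property names are chained to the model's.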

    Read the article

  • Run Word on the server for COM to work?

    - by chupinette
    I got this from the php.net website. It is related to the problem I am having with the code below. Can anyone explain to me what the following does? I am using Vista. What does running Word on the server imply?

    In order to get the Word example running, do the following on the server side. Worked for me...
    1. Click START--RUN and enter "dcomcnfg"
    2. In the "Applications" tab, go down to "Microsoft Word Document"
    3. Click the PROPERTIES button
    4. Go to the "Security" tab
    5. Click "Use custom access permissions", and then click EDIT
    6. Click ADD and then click SHOW USERS
    7. Highlight the IIS anonymous user account (usually IUSR_), click ADD
    8. Go back to the "Security" tab by hitting OK
    9. Click "Use custom launch permissions", and then click EDIT
    10. Click ADD and then click SHOW USERS
    11. Highlight the IIS anonymous user account (usually IUSR_), click ADD
    12. Hit OK, and then hit APPLY.
    Also, you should look at the "Identity" tab in the Microsoft Word Document PROPERTIES and see that it is set to "Interactive User". ALSO, log into the machine AS the IUSR_ account, start Word, and make sure to click through the dialog boxes that Word shows the first time it is run for a certain user. In other words, make sure Word opens cleanly for the IUSR_ user.

    ```php
    <?php
    // starting word
    $word = new COM("word.application") or die("Unable to instantiate Word");
    echo "Loaded Word, version {$word->Version}\n";

    // bring it to front
    $word->Visible = 1;

    // open an empty document
    $word->Documents->Add();

    // do some weird stuff
    $word->Selection->TypeText("This is a test...");
    $word->Documents[1]->SaveAs("Useless test.doc");

    // closing word
    $word->Quit();

    // free the object
    $word = null;
    ?>
    ```

    Read the article

  • Advice on logging in and sharing via Facebook, Twitter, LiveJournal, etc. on the iPhone

    - by Tristan
    Hi. I would like to enable my iPhone app users to share content via services like Facebook, Twitter and as many others as possible. It would also be great to allow them to use their Twitter/Facebook/Myspace/etc. account to sign in to my app, rather than requiring them to create a new account on my server. Currently I'm interfacing with each of them individually, but I would like to use a service like Gigya (www.gigya.com) or www.rpxnow.com to let me support many more services (e.g. digg.com, LiveJournal, etc.) without writing code to interact with every one of them. How would you advise doing this? Thanks, Tristan

    Read the article

  • Implementing Security on custom BCS/.net class?

    - by Michael Stum
    I'm implementing a custom BCS model to get data from a backend system. As the backend uses its own user management, I'm accessing it through a service account. All of this works well and allows me to pull data into SharePoint. However, because it's channeled through the service account, everyone can access it, which is bad.

    Can anyone give me some tips on which method to implement? The backend does not give me NT ACLs, but I wonder if I could just "fake" them somehow? (Essentially saying "this NT group has read access" is good enough.) I am aware of ISecurityTrimmer2 for search results, but ideally I want to cover security inside the BCS model so that it applies to external lists as well. I want to avoid using secure storage and mapping each individual user to the backend.

    Read the article

  • SQL Server database backup: Network Service file access

    - by Keith Maurino
    When trying to run the following database backup command from my code I get an "Operating system error 5 (Access is denied.)" error. This is because the log on account for the SQL Server Windows service is 'Network Service', and that account does not have permission to write to this folder.

    ```sql
    BACKUP DATABASE [AE3DB] TO DISK = 'c:\AE3\backup\AE3DB.bak'
    ```

    My question is: from my code, how would I go about figuring out where on the C drive 'Network Service' is allowed to write the backup to?

    NOTE: This is a distributed application, so I cannot easily change the log on for the SQL Server Windows service to the 'Local System' account, which would be able to write to that folder.

    Read the article

  • Difficulties signing in to Twitter from iPhone

    - by Tristan
    Hi there, I'm using MGTwitterEngine to add Twitter functionality to my app. It's very easy to simply prompt the user for a username and password and then start posting. Strangely, however, I've noticed other apps (eg Foursquare and Brightkite) require you to visit their website to associate your Twitter account with your foursquare/brightkite/whatever account. Why do they do it this way? Is there a reason why my app shouldn't prompt the user for a Twitter username and password, even though it would be so easy? Thanks a bunch! Tristan

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided with the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

    ```
    ### Authenticated paths - these apply only when the client
    ### has a valid certificate and is thus authenticated

    # allow nodes to retrieve their own catalog
    path ~ ^/catalog/([^/]+)$
    method find
    allow $1

    # allow nodes to retrieve their own node definition
    path ~ ^/node/([^/]+)$
    method find
    allow $1

    # allow all nodes to access the certificates services
    path ~ ^/certificate_revocation_list/ca
    method find
    allow *

    # allow all nodes to store their reports
    path /report
    method save
    allow *

    # unconditionally allow access to all file services
    # which means in practice that fileserver.conf will
    # still be used
    path /file
    allow *

    ### Unauthenticated ACL, for clients for which the current master doesn't
    ### have a valid certificate; we allow authenticated users, too, because
    ### there isn't a great harm in letting that request through.

    # allow access to the master CA
    path /certificate/ca
    auth any
    method find
    allow *

    path /certificate/
    auth any
    method find
    allow *

    path /certificate_request
    auth any
    method find, save
    allow *

    path /facts
    auth any
    method find, search
    allow *

    # this one is not strictly necessary, but it has the merit
    # of showing the default policy, which is deny everything else
    path /
    auth any
    ```

    The puppet master, however, does not seem to be following this, as I get this error on the client:

    ```
    [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
    [sudo] password for amisr1:
    Starting Puppet client version 3.0.1
    Warning: Unable to fetch my node definition, but the agent run will continue:
    Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
    Info: Retrieving plugin
    Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
    Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
    Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
    Using cached catalog
    Error: Could not retrieve catalog; skipping run
    Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110
    ```

    and the server logs show:

    ```
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
    XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"
    ```

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):

    ```
    [files]
        path /apps/puppet/files
        allow *

    [private]
        path /apps/puppet/private/%H
        allow *

    [modules]
        allow *
    ```

    I am using server and client version 3. Nginx has been compiled using the following options:

    ```
    nginx version: nginx/1.3.9
    built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
    TLS SNI support enabled
    configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/
    ```

    and the standard Nginx puppet master conf:

    ```
    server {
        ssl on;
        listen 8140 ssl;
        server_name _;
        passenger_enabled on;
        passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
        passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
        passenger_min_instances 5;
        access_log logs/puppet_access.log;
        error_log logs/puppet_error.log;
        root /apps/nginx/html/rack/public;
        ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
        ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
        ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
        ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
        ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
        ssl_prefer_server_ciphers on;
        ssl_verify_client optional;
        ssl_verify_depth 1;
        ssl_session_cache shared:SSL:128m;
        ssl_session_timeout 5m;
    }
    ```

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

    ```
    [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
    async_storeconfigs = false
    authconfig = /etc/puppet/namespaceauth.conf
    autosign = /etc/puppet/autosign.conf
    catalog_cache_terminus = store_configs
    confdir = /etc/puppet
    config = /etc/puppet/puppet.conf
    config_file_name = puppet.conf
    config_version = ""
    configprint = all
    configtimeout = 120
    dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
    deviceconfig = /etc/puppet/device.conf
    fileserverconfig = /etc/puppet/fileserver.conf
    genconfig = false
    hiera_config = /etc/puppet/hiera.yaml
    localconfig = /var/lib/puppet/state/localconfig
    name = config
    rest_authconfig = /etc/puppet/auth.conf
    storeconfigs = true
    storeconfigs_backend = puppetdb
    tagmap = /etc/puppet/tagmail.conf
    thin_storeconfigs = false
    ```

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update

    I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

    ```
    Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
    Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
    Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
    Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
    10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
    ```

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

    ```
    [amisr1@blramisr195602 ~]$ sudo facter fqdn
    blramisr195602.XXXXXXX.com
    ```

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated the certificates and signed them on the master using the option --allow-dns-alt-names:

    ```
    [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
    Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
    [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
    Signed certificate request for blramisr195602.XXXXXX.com
    Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'
    ```

    However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • What does sub error code 568 mean for Ldap Error 49 with Active Directory

    - by Dean Povey
    I am writing some Java code that authenticates to Active Directory using SASL GSSAPI. Mostly this code works fine, but for one user I am getting the response:

    ```
    javax.naming.AuthenticationException: [LDAP: error code 49 - 80090304: LdapErr: DSID-0C0904D1, comment: AcceptSecurityContext error, data 568, v1772 ]
    ```

    I know that 49 means this is an authentication failure, and that the relevant sub-code is 568, but I am only aware of the following meanings for that data field:

    525 - user not found
    52e - invalid credentials
    530 - not permitted to logon at this time
    532 - password expired
    533 - account disabled
    701 - account expired
    773 - user must reset password

    So far I have been unable to find an authoritative source for these error codes from Microsoft (this list is pieced together from forum posts), and I can't find anything for that 568 value. Does anyone know what it means?
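
    For what it's worth, the "data" sub-code only appears inside the message text, so mapping it against a table like the one above means parsing it out of the string. A minimal Python sketch of that, using only the codes listed in the question (the function and variable names are hypothetical, and the table is deliberately incomplete):

    ```python
    import re

    # The message string reported in the question.
    msg = ("[LDAP: error code 49 - 80090304: LdapErr: DSID-0C0904D1, "
           "comment: AcceptSecurityContext error, data 568, v1772 ]")

    # Sub-codes are hexadecimal; this table repeats the question's own list.
    KNOWN_SUB_CODES = {
        "525": "user not found",
        "52e": "invalid credentials",
        "530": "not permitted to logon at this time",
        "532": "password expired",
        "533": "account disabled",
        "701": "account expired",
        "773": "user must reset password",
    }

    m = re.search(r"data\s+([0-9a-fA-F]+)", msg)
    if m:
        sub_code = m.group(1).lower()
        print(sub_code, "->", KNOWN_SUB_CODES.get(sub_code, "unknown sub-code"))
    # prints: 568 -> unknown sub-code
    ```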

    Read the article

  • How to Grant IIS 7.5 access to a certificate in certificate store?

    - by thames
    In Windows 2003 it was simple to do: one could use winhttpcertcfg.exe (download) to give the "NETWORK SERVICE" account access to a certificate. I'm now using Windows Server 2008 R2 with IIS 7.5, and I am unable to find where and how to set access permissions on a certificate in the certificate store. This post showed how to do it in Vista, and that the winhttpcertcfg features were added into the certificates MMC; however, it doesn't seem to work with imported certificates, or doesn't work anymore on Server 2008 R2. So does anyone have any idea how to give IIS 7.5 the correct permissions to read a certificate from the certificate store? And also, which account from IIS 7.5 needs the permission?

    Read the article

  • How do I shrink my SQL Server Database?

    - by Rory Becker
    I have a database nearly 1.9 GB in size. MSDE2000 does not allow DBs that exceed 2.0 GB, so I need to shrink this DB (and many like it at various client locations).

    I have found and deleted many hundreds of thousands of records which are considered unneeded. These records account for a large percentage of some of the main (largest) tables in the database, so it's reasonable to assume much space should now be retrievable.

    So now I need to shrink the DB to account for the missing records. I execute "DBCC ShrinkDatabase('MyDB')"... no effect. I have tried the various shrink facilities provided in MSSMS... still no effect. I have backed up the database and restored it... still no effect. Still 1.9 GB. Why?

    Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.

    Read the article

  • mysql naming convention

    - by Lizard
    I have generally always used some sort of Hungarian notation for the field names in my tables, e.g.

    ```
    # Table: Users
    u_id, u_name, u_email, etc...

    # Table: Posts
    p_id, p_u_id, p_title, p_content, etc...
    ```

    But I have recently been told that this isn't best practice. Is there a more standard way of doing this? I haven't really liked just using the field name "id", as that then requires you to select table.field for field names that appear in multiple tables when using joins, etc. Your thoughts on what is best practice would be appreciated.

    Read the article

  • Basic Ruby on Rails Question about routing

    - by Acidburn2k
    I have a controller without any related model. This controller presents information spanning various models. I have lots of actions there, which define certain views on the page.

    What would be the best way to organize routes for this controller? What I would like is to have /dashboard/something point to any action in the dashboard controller, not actions like new/edit but arbitrary ones (showstats, etc.). With trial and error I made something like this:

    ```ruby
    map.dashboard 'dashboard/:action', :controller => 'dashboard', :action => :action
    ```

    Now it is possible to access those URLs using a helper: dashboard_url('actionname'). This approach seems to be working OK, but is this the way to go? I don't quite understand how the helper method names are generated. How do I generate the same helper names as in basic controllers, "action_controller_url"? That would be more generic and make the code more consistent. Thanks in advance.

    Read the article

  • Regex pattern problem in python

    - by mridang
    I need to extract parts of a string using regex in Python. I'm good with basic regex but I'm terrible at lookarounds. I've shown two sample records below. The last bit is always a currency field, e.g. in the first one it is 4,76; in the second one it is 2,00. The second record has an account number matching the pattern \d{6}-\d{6}. Anything after that is the currency.

    ```
    24.02 24.02VALINTATALO MEGAHERTSI4,76-
    24.02 24.02DOE MRIDANG 157235-1234582,00-
    ```

    Could you help me out with this regex? What I've written so far is given below, but it considers everything after the dash in the account number to be the currency.

    ```
    .*?(\d\d\.\d\d)(.*?)\s*(?<!\d{6}-\d{6})(\d*,\d\d)
    ```

    Thanks in advance
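
    One way to sidestep the lookaround entirely, sketched below under the assumption that the currency always closes the record with a trailing dash: let an optional, non-capturing account-number group eat the \d{6}-\d{6} part before the currency is captured. The sample data is taken from the question; treat the pattern as a starting point rather than a definitive answer.

    ```python
    import re

    # Assumes: each record ends with <currency>-, and an account number
    # (\d{6}-\d{6}), when present, sits directly in front of the currency.
    pattern = re.compile(r"^(\d\d\.\d\d)\s+\d\d\.\d\d(.*?)(?:\d{6}-\d{6})?(\d+,\d\d)-$")

    records = [
        "24.02 24.02VALINTATALO MEGAHERTSI4,76-",
        "24.02 24.02DOE MRIDANG 157235-1234582,00-",
    ]

    for record in records:
        m = pattern.match(record)
        if m:
            date, description, amount = m.groups()
            print(date, description.strip(), amount)
    # prints:
    # 24.02 VALINTATALO MEGAHERTSI 4,76
    # 24.02 DOE MRIDANG 2,00
    ```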

    Read the article

  • Scrapy spider is not working

    - by Zeynel
    Since nothing so far is working, I started a new project with

    ```
    python scrapy-ctl.py startproject Nu
    ```

    I followed the tutorial exactly, created the folders, and a new spider:

    ```python
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from scrapy.item import Item
    from Nu.items import NuItem
    from urls import u

    class NuSpider(CrawlSpider):
        domain_name = "wcase"
        start_urls = ['http://www.whitecase.com/aabbas/']

        names = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+')
        u = names.pop()

        rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)

        def parse(self, response):
            self.log('Hi, this is an item page! %s' % response.url)
            hxs = HtmlXPathSelector(response)
            item = Item()
            item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)')
            return item

    SPIDER = NuSpider()
    ```

    and when I run

    ```
    C:\Python26\Scripts\Nu>python scrapy-ctl.py crawl wcase
    ```

    I get

    ```
    [Nu] ERROR: Could not find spider for domain: wcase
    ```

    The other spiders at least are recognized by Scrapy; this one is not. What am I doing wrong? Thanks for your help!

    Read the article

  • Facebook: how to get a user's list of friend IDs when the user is offline (not signed in)

    - by Vaibhav Bhalke
    Hi all. In Facebook, how do we get a user's list of friend IDs when the user is offline (not signed in)?

    We are integrating a Facebook application into our website. Our website is developed with Java's GWT (Google Web Toolkit) framework, and we are referring to the "Gwittit" sample code. We opened a Facebook account for our website and want to show the photos of all users (friends) connected to the website's Facebook account when that account is offline (not signed in).

    We have used apiclient.getFriendList(), which gives us the list of all user IDs (with photos) connected to our website's Facebook account. The problem is that we have to sign in first, and we don't want that. Is there any way to solve this problem?

    Read the article

  • Jquery ajax success function returns null?

    - by udaya
    I use the following jQuery code to call my PHP controller function. It gets called, but the result is not returned to my success function.

    ```javascript
    function getRecordspage() {
        $.ajax({
            type: "POST",
            url: "http://localhost/codeigniter_cup_myth/index.php/adminController/mainAccount",
            data: "{}",
            contentType: "application/json; charset=utf-8",
            global: false,
            async: true,
            dataType: "json",
            success: function(result) {
                alert(result);
            }
        });
    }
    ```

    My controller method:

    ```php
    function mainAccount()
    {
        $_SESSION['menu'] = 'finance';
        $data['account'] = $this->adminmodel->getaccountDetails();
        if (empty($data['account'])) {
            $data['comment'] = 'No record found !';
        }
        $json = json_encode($data);
        return $json;
    }
    ```

    I get the alert(1) in my success function, but my alert(result) shows null... Any suggestions?

    Read the article

  • C# Winforms ADO.NET - DataGridView INSERT starting with null data

    - by Geo Ego
    I have a C# WinForms app that is connecting to a SQL Server 2005 database. The form I am currently working on allows a user to enter a large amount of different types of data into various textboxes, comboboxes, and a DataGridView, to be inserted into the database. It all represents one particular type of machine, and the data is spread out over about nine tables.

    The problem I have is that my DataGridView represents a type of data that may or may not be added to the database. Thus, when the DataGridView is created, it is empty and not data-bound, and so data cannot be entered. My question is: should I create the table with hard-coded column names representing the way that the data looks in the database, or is there a way to simply have the column names populate with no data so that the user can enter it if they like? I don't like the idea of hard-coding them in case there is a change in the database schema, but I'm not sure how else to deal with this problem.

    Read the article

  • LINQ OrderBy: best search results at top of results list

    - by p.campbell
    Consider the need to search a list of Customer by both first and last names. The desire is to have the results list sorted by the Customer with the most matches in the search terms.

    ```
    FirstName  LastName
    ---------- ---------
    Foo        Laurie
    Bar        Jackson
    Jackson    Bro
    Laurie     Foo
    Jackson    Laurie
    ```

    ```csharp
    string[] searchTerms = new string[] {"Jackson", "Laurie"};

    // want to find those customers with first, last or BOTH names in the searchTerms
    var matchingCusts = Customers
        .Where(m => searchTerms.Contains(m.FirstName) || searchTerms.Contains(m.LastName))
        .ToList();

    /* Want to sort for those results with BOTH FirstName and LastName matching in the search terms.
       Those that match on both First and Last should be at the top of the results,
       the rest who match on one property should be below. */
    return matchingCusts.OrderBy(m => m);
    ```

    Desired sort:

    ```
    Jackson    Laurie    (matches on both properties)
    Foo        Laurie
    Bar        Jackson
    Jackson    Bro
    Laurie     Foo
    ```

    How can I achieve this desired functionality with LINQ and OrderBy / OrderByDescending?
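
    The sort key being asked for is simply "how many of the two name fields hit the search terms". A quick sketch of that idea in Python (kept language-neutral on purpose; the LINQ equivalent would be an OrderByDescending over the same computed integer key, and the customer tuples below are stand-ins for the question's rows):

    ```python
    customers = [
        ("Foo", "Laurie"),
        ("Bar", "Jackson"),
        ("Jackson", "Bro"),
        ("Laurie", "Foo"),
        ("Jackson", "Laurie"),
    ]
    search_terms = {"Jackson", "Laurie"}

    # Keep rows where either name matches, then sort by the number of matching
    # fields, highest first; a stable sort keeps the rest in their original order.
    matching = [c for c in customers if c[0] in search_terms or c[1] in search_terms]
    matching.sort(key=lambda c: (c[0] in search_terms) + (c[1] in search_terms), reverse=True)

    for first, last in matching:
        print(first, last)    # ("Jackson", "Laurie") comes out on top
    ```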

    Read the article

  • XmlDocument from LINQ to XML query

    - by Ben
    I am loading an XML document into an XDocument object, doing a query and then returning the data through a web service as an XmlDocument object. The code below works fine, but it just seems a bit smelly. Is there a cleaner way to take the results of the query and convert back to an XDocument or XmlDocument?

    ```csharp
    XDocument xd = XDocument.Load(Server.MapPath(accountsXml));

    var accounts = from x in xd.Descendants("AccountsData")
                   where userAccounts.Contains(x.Element("ACCOUNT_REFERENCE").Value)
                   select x;

    XDocument xd2 = new XDocument(
        new XDeclaration("1.0", "UTF-8", "yes"),
        new XElement("Accounts")
    );

    foreach (var account in accounts)
        xd2.Element("Accounts").Add(account);

    return xd2.ToXmlDocument();
    ```

    Read the article

  • Algorithm for disordered sequences of strings

    - by Kinopiko
    The Levenshtein distance gives us a way to calculate the distance between two similar strings in terms of disordered individual characters:

    ```
    quick brown fox
    quikc brown fax
    ```

    The Levenshtein distance = 3. What is a similar algorithm for the distance between two strings with similar subsequences? For example, in

    ```
    quickbrownfox
    brownquickfox
    ```

    the Levenshtein distance is 10, but this takes no account of the fact that the strings have two similar subsequences, which makes them more "similar" than completely disordered words like

    ```
    quickbrownfox
    qburiocwknfox
    ```

    and yet the completely disordered version has a Levenshtein distance of eight. What distance measures exist which take the length of subsequences into account, without assuming that the subsequences can be easily broken into distinct words?
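
    One ad-hoc idea, not a named standard metric: score the ordered common blocks that a sequence matcher finds, weighting longer runs much more heavily than scattered single characters. A minimal Python sketch of that heuristic, using the question's own sample strings (the function name is hypothetical, and the squared weighting is just one possible choice):

    ```python
    from difflib import SequenceMatcher

    def block_similarity(a, b):
        """Score ordered common blocks, weighting longer runs more (length squared),
        so shared chunks like 'quick' or 'brown' count for far more than
        scattered single characters."""
        blocks = SequenceMatcher(None, a, b).get_matching_blocks()
        return sum(size * size for _, _, size in blocks)

    reference = "quickbrownfox"
    print(block_similarity(reference, "brownquickfox"))   # reordered words: should score higher
    print(block_similarity(reference, "qburiocwknfox"))   # scrambled letters: should score lower
    ```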

    Read the article

  • Encoding problem with preg_replace() and scandir()

    - by itarato
    Hi. On OS X (PHP 5.2.11) I have a file siësta.doc (and a thousand other files with Unicode filenames), and I want to convert the file names to a web-consumable format (a-zA-Z0-9.). If I hardcode the file name above, I can do the right conversion:

    ```php
    <?php
    $file = 'siësta.doc';
    echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
    // Output: si_sta.doc
    ?>
    ```

    But if I read the file names with scandir, I get strange conversions:

    ```php
    <?php
    $files = scandir(DIRNAME);
    foreach ($files as $file) {
        echo preg_replace("/[^a-zA-Z0-9.]/u", '_', $file);
        // Output for the file above: sie_sta.doc
    }
    ?>
    ```

    I tried to detect the encoding, set the encoding, and convert it with the iconv functions. I also tried the mb_ functions, but it only got worse. What did I do wrong? Thanks in advance
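
    The two outputs are consistent with composed versus decomposed Unicode: OS X's HFS+ filesystem stores names in a decomposed form, so scandir hands back 'e' followed by a combining diaeresis rather than the single character 'ë'. A small Python sketch (used here only to keep the demonstration self-contained) reproduces both results:

    ```python
    import re
    import unicodedata

    composed = "si\u00ebsta.doc"                           # 'ë' as a single code point, U+00EB
    decomposed = unicodedata.normalize("NFD", composed)    # 'e' + U+0308 combining diaeresis

    pattern = re.compile(r"[^a-zA-Z0-9.]")
    print(pattern.sub("_", composed))     # si_sta.doc  -- the hardcoded case
    print(pattern.sub("_", decomposed))   # sie_sta.doc -- what scandir-style names produce
    ```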

    Read the article

  • dynamic log4net appender name?

    - by sanjeev40084
    Let's say I have three SMTP appenders in the same log4net file, whose names are:

    ```xml
    <appender name="emailDevelopment" .. />
    <appender name="emailBeta" .. />
    <appender name="emailProduction" .. />
    ```

    and three different servers (Dev, Beta, Production). Depending on the server, I want to fire the log: in the case of the Development server, it would fire the log from "emailDevelopment". I have a system variable on each server named "ApplicationEnvironment" whose value is Development, Beta or Production, based on the server name.

    Now, is there any way I can set up root in log4net so that it fires email depending on the server name?

    ```xml
    <root>
      <priority value="ALL" />
      <appender-ref ref="email<environment name from whose appender should be used>" />
    </root>
    ```

    Read the article

  • PLT Scheme Extracting field ids from structures

    - by Steve Knight
    I want to see if I can map PLT Scheme structure fields to columns in a DB. I've figured out how to extract accessor functions from structures in PLT Scheme using the fourth return value of (struct-type-info). However, the returned procedure indexes into the struct using an integer. Is there some way that I can find out what the field names were at the point of definition?

    Looking at the documentation, it seems like this information is "forgotten" after the structure is defined and exists only via the generated accessor functions: (<id>-<field-id> s). So I can think of two possible solutions:

    1. Search the namespace symbols for ones that start with my struct name (yuk);
    2. Define a custom define-struct macro that captures the ordered sequence of field names inside some hash that is keyed by struct name (eek).

    Read the article
