Search Results

Search found 2646 results on 106 pages for 'fetch'.

Page 58/106

  • Object oriented EDI handling in PHP

    - by Robert van der Linde
    I'm currently starting a new sub-project where I will:

    1. Retrieve the order information from our mainframe
    2. Save the order information to our web app's database
    3. Send the order as EDI (either D01b or D93a)
    4. Receive the order response, despatch advice and invoice messages
    5. Do all kinds of fun things with the resulting datasets

    However, I am struggling with my initial class designs. The order information will be retrieved from the mainframe, which will result in an "AOrder" class. That part isn't a problem; what I am not sure about is how to mold this local object into an EDI string. Should I create EDIOrder/EDIOrderResponse/etc. classes with matching decorators (EDIOrderD01BDecorator, EDIOrderD93ADecorator)? Do I need builder objects, or can I do:

        // $myOrder is an instance of AOrder
        $myOrder->toEDIOrder();
        $decorator = new EDIOrderD01BDecorator($myOrder);
        $edi = $decorator->getEDIString();

    It'll have to work the other way around as well. Is the following a good way of handling this problem, or should I go about it differently?

        $ediString = $myEDIMessageBroker->fetch();
        $ediOrderResponse = EDIOrderResponse::fromString($ediString);

    I'm just not so sure how I should go about designing the classes and the interactions between them. Thanks for reading and helping.
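    A minimal sketch of how one decorator per dialect could look. Everything here beyond the class names in the question is an assumption, and the segment strings are placeholders rather than real D01B output:

        // Hypothetical value object standing in for the converted AOrder.
        class EDIOrder {
            public function __construct(public string $orderId, public array $lines) {}
        }

        // Each dialect gets its own decorator; only the serialization differs.
        abstract class EDIOrderDecorator {
            public function __construct(protected EDIOrder $order) {}
            abstract public function getEDIString(): string;
        }

        class EDIOrderD01BDecorator extends EDIOrderDecorator {
            public function getEDIString(): string {
                $segments = ["UNH+{$this->order->orderId}+ORDERS:D:01B:UN'"];
                foreach ($this->order->lines as $i => $line) {
                    $segments[] = sprintf("LIN+%d++%s'", $i + 1, $line);
                }
                return implode("\n", $segments);
            }
        }

        $decorator = new EDIOrderD01BDecorator(new EDIOrder('4711', ['SKU-1', 'SKU-2']));
        echo $decorator->getEDIString();

    The symmetric direction (EDIOrderResponse::fromString) could live in a static factory or one builder per dialect, so the broker code in the question stays unchanged.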

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem

    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information and decided I'd post it here for others to enjoy.

    Background

    The Get-SPShellAdmin cmdlet returns a listing of SPShellAdmins for the database Id you pass in, or the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution

    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # declare a hashtable to store results
        $results = @{}

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and add db name and username to result
        $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

        # sort results by db name and pipe to table with auto sizing of col width
        $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion

    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed, just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links

    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    I think Johannes Ernst summarized pretty well what happened, in a broad sense, with OpenId, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from this. First, the "Apple Lesson": if user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad found success where the earlier tablet market hadn't. The iPad is not only a very good technology product, but it was backed by a very good marketing plan. I know most people do not want to think about marketing here, but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about. Second, the "Facebook Lesson": Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company made a singular focus to make it happen. Third, the "Developers Lesson": the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. By using a combination of URL and JavaScript, the power a single HTML page now gives a developer writing web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the social graph API had a profound impact on me in designing our next generation APIs.

    Read the article

  • Is it possible to transfer a domain without a "gap" in Whois privacy protection?

    - by Guest
    I currently own several domains on which I am using a Whois privacy protection service to hide my personal details. In the near future, I would like to transfer some of these domains to a different registrar. It has been many years since I last performed domain transfers, so I am no longer knowledgeable about what they involve. However, I have read from several registrars that they ask their customers to disable Whois protection before effecting a domain transfer. Since there are several websites out there that publish archived versions of Whois information (and ask for handsome money to have the information hidden, of course), I would prefer to avoid having such a "gap" in my privacy protection. I figured that these websites would fetch Whois information mainly when a query is made through their own website. However, I have found that at least one of these sites had a copy of the Whois information for a new domain up on their site within hours after I registered it, so they must have some other source (of course I used a Google search to find that out, not their own site). What this tells me is that the time it takes for a domain transfer to go through would be more than enough for these rogue websites to cache my information. If my new registrar offers privacy protection for domains right from the point of registration as well, is there no way to transfer the domain between the two without reverting to my default Whois information in between?

    Read the article

  • Static IP configuration causing apt-get errors

    - by JPbuntu
    I am getting errors when running apt-get update or when installing new packages, but only when the server is configured with a static IP address. Changing the configuration back to DHCP and restarting networking fixes the problem, but I want a static IP. Once it is working I can change back to my static IP address and restart networking; however, this only works until I restart the server (restarting the router is OK), and then I start getting the same errors and have to switch back to DHCP. Any ideas on what could be causing this, or tips on troubleshooting it? Thanks in advance. Here is my static IP configuration:

        auto eth0
        iface eth0 inet static
            address 192.168.2.2
            netmask 255.255.255.0
            gateway 192.168.2.1

    The apt-get update errors go something like this. A few of these:

        Ign http://us.archive.ubuntu.com precise-backports InRelease

    then a lot of these:

        Err http://security.ubuntu.com precise-security Release.gpg  Something wicked happened resolving 'security.ubuntu.com:http' (-5 - No address associated with hostname)

    and a lot of these:

        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Something wicked happened resolving 'us.archive.ubuntu.com:http' (-5 - No address associated with hostname)
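    The symptom (only DNS resolution failing, and only under the static configuration) is consistent with the stanza defining no name servers. A hedged sketch of the same stanza with resolver entries added, assuming DHCP had previously been supplying DNS via resolvconf on this Precise system:

        auto eth0
        iface eth0 inet static
            address 192.168.2.2
            netmask 255.255.255.0
            gateway 192.168.2.1
            # assumption: the router at 192.168.2.1 forwards DNS; any
            # reachable resolver would work here
            dns-nameservers 192.168.2.1 8.8.8.8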

    Read the article

  • Grouping a comma separated value on common data [closed]

    - by Ankit
    I have a table with col1 (id, int), col2 (varchar, comma-separated values) and a third column for assigning a group to each row. The table looks like:

        col1  col2   group
        1     2,3,4
        2     5,6
        3     1,2,5
        4     7,8
        5     11,3
        6     22,8

    This is only a sample of the real data. Now I have to assign a group number to each row so that the output looks like:

        col1  col2   group
        1     2,3,4  1
        2     5,6    1
        3     1,2,5  1
        4     7,8    2
        5     11,3   1
        6     22,8   2

    The logic for assigning the group number is that rows whose col2 strings share any value must get the same group number: everywhere a '2' appears in col2, the row must get the same group number. The complication is that this is transitive: 2,3,4 sit together, so those three values found anywhere in col2 must be assigned the same group. The major part is that 2,3,4 and 1,2,5 both contain 2, so all of the values 1,2,3,4,5 have to be assigned the same group number. I tried a stored procedure with MATCH ... AGAINST on col2 but am not getting the desired result. Most importantly, I can't use normalization, because I can't afford to build a new table from my original table, which has millions of records; even normalization is not helpful in my context. This question is also on Stack Overflow with a bounty.

    Achieved so far: I have set the group column to auto-increment and then wrote this procedure:

        BEGIN
            DECLARE done TINYINT DEFAULT 0;
            DECLARE col1_new, group_new INT;
            DECLARE col2_new VARCHAR(100);
            DECLARE cur1 CURSOR FOR SELECT col1, col2, `group` FROM company;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
            OPEN cur1;
            REPEAT
                FETCH cur1 INTO col1_new, col2_new, group_new;
                UPDATE company SET `group` = group_new
                WHERE MATCH(col2) AGAINST(CONCAT("'", col2_new, "'"));
            UNTIL done END REPEAT;
            CLOSE cur1;
            SELECT * FROM company;
        END

    The procedure runs with no syntax mistake, but the problem is that I am not achieving the desired result.
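    This is a transitive grouping (connected components) problem, so a single cursor pass that copies group numbers between rows cannot solve it. A minimal union-find sketch in PHP, using the sample rows from the question (fetching the real rows from MySQL is assumed, not shown):

        // Union-find over the integer values in col2: rows whose value sets
        // overlap, directly or transitively, end up in the same component.
        $rows = [1 => '2,3,4', 2 => '5,6', 3 => '1,2,5', 4 => '7,8', 5 => '11,3', 6 => '22,8'];

        $parent = [];

        function find(array &$p, int $x): int {
            if (!isset($p[$x])) { $p[$x] = $x; }
            if ($p[$x] !== $x) { $p[$x] = find($p, $p[$x]); }  // path compression
            return $p[$x];
        }

        function unite(array &$p, int $a, int $b): void {
            $p[find($p, $a)] = find($p, $b);
        }

        foreach ($rows as $csv) {
            $vals = array_map('intval', explode(',', $csv));
            foreach (array_slice($vals, 1) as $v) {
                unite($parent, $vals[0], $v);
            }
        }

        // Number the components 1, 2, ... in row order and print each row's group.
        $groupOf = [];
        $next = 1;
        foreach ($rows as $col1 => $csv) {
            $root = find($parent, (int) explode(',', $csv)[0]);
            $groupOf[$root] ??= $next++;
            echo "$col1 => {$groupOf[$root]}\n";  // prints groups 1,1,1,2,1,2 as desired
        }

    The UPDATE back to the table would then be one statement per row (or a batched CASE expression), which avoids relying on MATCH ... AGAINST semantics entirely.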

    Read the article

  • Chrome window controls in xubuntu 12.04

    - by Norbert
    I'm a long-time Ubuntu 10.04 LTS user, very used to having window controls at the top left. A new computer with a new video card requires a newer kernel; Unity doesn't suit me, and much googling turned up recommendations for Xubuntu. I installed 12.04, and the first aggravation is window controls at the upper right, a la Windows. No problem: Settings Manager - Window Manager - rearrange button layout fixes everything. Until... I abandoned Firefox long ago because of memory-leak issues and general fat, so I fetched a current version of Chrome (19.0.1084.52) and installed it. Once AdBlock was added and it was the default browser, everything was great. Except... alone among all applications, Chrome will not honour the window manager's preferred button layout. Buttons are at the upper right no matter what I try. Uninstall and then reinstall: nope. Trawl through /home/user/.config/google-chrome/* looking for a likely setting: nope. Search the web: nothing useful. How do I get Chrome's window decoration in sync? Thanks very much for any and all help.

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: my PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK on the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon), the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives so far (where ** is my password):

    - Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor system/proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'user authentication' and 'use_http_proxy', under gconf-editor system/http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting the Software Center retrieve packages).

    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it. Related issues: the Software Centre can't fetch reviews (but can download packages), and when trying to add an online account in GNOME 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Update: after some time (10 mins or so) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
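    For that last question, the environment of the current shell can be listed from a terminal, for example:

        env | grep -i proxy    # show only proxy-related variables
        echo "$http_proxy"     # or inspect one variable directly

    Note that variables exported in /etc/bash.bashrc only reach programs started from a shell, which is one common reason a GUI application like the Dropbox installer never sees http_proxy.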

    Read the article

  • 'Xojo' is the only application that I can't install

    - by Gichan
    I can't install Xojo. When I click Install in the Software Center, it doesn't progress. In the terminal it gets stuck:

        gichan02@gichan02-Latitude-D520:~$ sudo apt-get install xojo
        [sudo] password for gichan02:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xojo-bin
        The following NEW packages will be installed:
          xojo xojo-bin
        0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
        Need to get 209 MB/209 MB of archives.
        After this operation, 596 MB of additional disk space will be used.
        Do you want to continue? [Y/n] Y
        0% [Working]

    Then, after waiting an hour with no progress, it says:

        Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu/pool/main/x/xojo/xojo-bin_2013.41-0ubuntu1_i386.deb  Could not resolve host: private-ppa.launchpad.net

    So I added an apt repository for the private PPA:

        deb https://ging-giana:[email protected]/commercial-ppa-uploaders/xojo/ubuntu trusty main

    Then when I try apt-get update:

        GPG error: https://private-ppa.launchpad.net trusty Release: The following signatures were invalid: NODATA 2

    Then I noticed something in the Software Sources "Other software" tab:

        Added by software-center; credentials stored in /etc/apt/auth.conf
        https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu

    So I went to /etc/apt/auth.conf, but it cannot be opened, and it is not a keyserver. So I unchecked the entry above, and the GPG error was gone. But then I found myself back at the beginning of the problem: stuck at '0% [Working]'. Xojo is the only application that I can't install. Any explanation for why that is?
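    For reference, /etc/apt/auth.conf is readable only by root (hence "cannot be opened" as a normal user) and uses a netrc-style format, so the credentials the Software Center stored should look roughly like this, with the login and password matching the ones embedded in the deb line above:

        machine private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu
        login ging-giana
        password APTsecretkey

    Keeping the credentials only in auth.conf, with a plain sources entry, is the intended setup for private PPAs; duplicating them in the deb line is not needed.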

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):

        public interface ITranslationService {
            string GetTranslation(string key, CultureInfo targetLanguage);
            // some other methods...
        }

    A first simple implementation of this interface already exists and simply goes to the database for every method call. Assuming a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for that language and cache them; all translation requests are then served from the cache. I thought about implementing this new behavior as a decorator, because all the other methods of the interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations of a certain language; it would fire its own query against the database. This breaks the decorator pattern, because every piece of functionality provided by the decorated instance is simply skipped. This becomes a real problem if there are other decorators involved. My understanding is that a decorator should be additive; in this case, the decorator is replacing the behavior of the decorated instance. I can't really think of a nice solution for this. How would you solve it? Everything is allowed, even a complete redesign of ITranslationService itself.
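    A sketch of the cache-on-first-request wrapper being described, written in PHP for consistency with the other examples on this page (every name besides getTranslation, and the table schema, are assumptions):

        interface TranslationService {
            public function getTranslation(string $key, string $language): string;
        }

        final class CachingTranslationService implements TranslationService {
            /** @var array<string, array<string, string>> */
            private array $cache = [];

            public function __construct(
                private TranslationService $inner,
                private \PDO $db,  // assumption: the bulk load queries the DB directly
            ) {}

            public function getTranslation(string $key, string $language): string {
                if (!isset($this->cache[$language])) {
                    // Bulk-load the whole language on first request; this is the
                    // step that bypasses the decorated instance.
                    $stmt = $this->db->prepare(
                        'SELECT t_key, t_text FROM translations WHERE language = ?');
                    $stmt->execute([$language]);
                    $this->cache[$language] = $stmt->fetchAll(\PDO::FETCH_KEY_PAIR);
                }
                // Fall back to the inner service on a miss, so single-key
                // lookups still delegate.
                return $this->cache[$language][$key]
                    ?? $this->inner->getTranslation($key, $language);
            }
        }

    One way to restore the pattern is to add a getAllTranslations(language) method to the interface itself; the wrapper can then delegate the bulk load to the decorated instance and stays a true decorator.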

    Read the article

  • session management: verifying a user's log-in state

    - by good_computer
    I am storing sessions in my database. Every time a user logs in, I create a new row for the new session, generate a new session id, and send it as a cookie to the browser. My session data looks something like this:

        {
            'user_id': 1234,
            'user_name': 'Sam',
            ...
        }

    When a request comes in, I check whether a cookie with a session id was sent. If it was, I fetch the session data from my database (or memcache) corresponding to that session id. When the user logs out, I remove the session data from my database (and memcache) and delete the cookie from the user's browser too. Notice that in my session data I don't have something like logged_in: true. This is because if I find a session record in the database (or memcache), I deduce that the user is logged in, and if there is no session record found, the user is not logged in. My questions: is this the right approach? Should I have a logged_in key in my session data? Is there any possibility that a session record may be present on the server where the corresponding user is actually NOT logged in? Are there any security implications in having or not having such a key?
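    A sketch of that lookup in PHP with PDO (table and column names are assumptions); the point is that the mere presence of a row is the logged-in signal, so no separate flag is stored:

        function currentUser(\PDO $db, ?string $sessionId): ?array {
            if ($sessionId === null) {
                return null;  // no cookie -> not logged in
            }
            $stmt = $db->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute([$sessionId]);
            $row = $stmt->fetchColumn();
            // A row exists only between login and logout, so finding one
            // is equivalent to "logged in".
            return $row === false ? null : json_decode($row, true);
        }

        $user = currentUser($db, $_COOKIE['session_id'] ?? null);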

    Read the article

  • [Oracle Identity Manager] 11g R2 Bundle Patch 09 is Available!

    - by mustafakaya
    Oracle Identity Manager Bundle Patch 09 is available now. You can download BP09 from here. Also, there is an important recommendation for BP08! List of bugs fixed with BP09:

    - Bug 12699224: Trusted source reconciliation fails to create users with many reconciliation field mappings.
    - Bug 14407437: Provisioning through bulk request inserts null records into child tables.
    - Bug 14493217: Target resource reconciliation throws an ORA-06512 error when the Descriptive field is mapped to a field that does not have a reconciliation field mapping.
    - Bug 16044671: User form customization fails if a UDF contains an invalid character.
    - Bug 16545968: Modifying any attribute on a service account changes the account type to a primary account.
    - Bug 16562633: Oracle Identity Manager throws javax.el.ELException while viewing a profile under direct report.
    - Bug 16662834: User not reprovisioned after the user is deleted and created in the target with the same orclguid.
    - Bug 16662905: If an LOV field is required on an Application Instance form, no validation is enforced on the LOV field although it is required.
    - Bug 16701873: The Members tab of a role displays only enabled users and does not display disabled users.
    - Bug 16862846: When a notification is being sent, the mail ID in the Reply To field is set to the recipient's mail ID instead of the sender's mail ID.
    - Bug 16824062: When you use the API to fetch or delete child data from an account, the child data row value is null; therefore, child data is not returned.
    - Bug 16912736: There is a performance issue when the provisioned application instance details are opened for a user.

    Read the article

  • apt-cacher ng / upgrade fails from client

    - by todayis23
    I'm running apt-cacher-ng on an Ubuntu Hardy server and am trying to upgrade the packages on a Natty client (which was initially a Maverick). I didn't change anything on the server. On the client I tried two setups.

    First, I configured APT to use an http proxy. On the client, apt-get update worked fine, but very slowly, and in acng-report.html I see an entry which seems to be correct. But after confirming "Install these packages without verification [y/N]? y", apt-get upgrade fails with the message:

        Err http://archive.ubuntu.com/ubuntu/ natty-updates/main libnux-0.9-common all 0.9.48-0ubuntu1.1
        503  Name or service not known

    The GUI update manager fails as well, with the message that untrusted packages will be installed.

    Second, I edited sources.list and added the server, in what I thought was the correct format, to all sources. Now apt-get update is very slow and I get a lot of errors like this:

        W: Failed to fetch http://[::ffff:10.10.10.10]:3142/archive.ubuntu.com/ubuntu/dists/natty/main/binary-i386/Packages  403 Forbidden file type or location

    After that, apt-get upgrade says: 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. What could be wrong? Is it possible to use apt-cacher-ng on an older system to upgrade newer systems? Thank you in advance!
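    As a point of comparison, the usual client setup for apt-cacher-ng is a one-line apt proxy snippet rather than rewritten sources (3142 is apt-cacher-ng's default port; the IP is taken from the failed fetch above):

        # /etc/apt/apt.conf.d/01acng on the client
        Acquire::http::Proxy "http://10.10.10.10:3142";

    With this in place, sources.list keeps its normal archive.ubuntu.com entries, which sidesteps URL-format issues like the bracketed ::ffff: address in the second setup.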

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This reports the row count and the reserved, data, index, and unused sizes for that one table. Of course this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup:

    1. Data server: an application with a RESTful interface which provides the data
    2. Website A: provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID
    3. Website B: provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID

    So all the non-private data is stored on (1), and websites (2) and (3) fetch and display this data as a representation with additional cross-linking between the two. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b; however, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a, and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar posted questions about this, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). The timeline of what I've tried, starting on July 29th:

    - Used 301 redirects for all pages (e.g. thenewhive.com/tag/art -> newhive.com/tag/art).

    At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I:

    - verified the new domain in Webmaster Tools (the old domain was already verified)
    - submitted a change of address request under Webmaster Tools / Configuration / Change of Address

    Then after another week, on August 13th, I:

    - went to Webmaster Tools / Health / Fetch as Google
    - fetched our homepage and a couple of sub-pages, all successfully
    - clicked "Submit to Index" for the homepage

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues; Crawl Errors: no data available. From Health - Index Status:

        Total indexed      0
        Ever crawled       42,490
        Not selected       12
        Blocked by robots  0

    I'm really at a loss here; any help would be appreciated.

    Read the article

  • SEO for maps-based websites that require user interaction

    - by j0nes
    I have a website that basically shows a lot of locations worldwide on a Google Maps-like interface. The map itself is built using the Leaflet library and OpenStreetMap tiles. On the map, I show markers at each location I have. A popup window appears when I click on a marker, showing additional information and containing links to "detail" pages for that location. I fetch the location data for the current viewport with an AJAX call to my server, so the additional information is not available in the HTML page itself. The detail pages are the pages my users are interested in. My normal users load the map, search for the location they are interested in, click on a marker, and click on a link in the popup window. For search engines, however, this might look different: as this navigation path relies on user interaction, I think they might not be able to find the detail pages. My questions: Are search engines able to follow a navigation path like the one outlined above? How can I improve the navigation for search engines? (For example, showing textual links below the map, sitemaps...) How important are internal links for SEO?

    Read the article

  • Code execution times out occasionally

    - by Athul k Surendran
    I am working on an e-commerce website. There is a case where I need to fetch all the data in the database through a third-party API and send it to an indexing engine. This third-party API has many functions, like getproducts and getproductprice, and each of those functions returns its data in XML format. From there I take charge: I make the various API calls, transform the XML data with XSLT, and write the result to a CSV file. This file is then uploaded to the indexing engine. Right now I have details of 8,000 products to feed the engine; almost every time, this process takes about 15 minutes to complete, and sometimes it fails. I can't find a better solution for this. I am thinking about handling the XML data in C# itself and getting rid of XSLT, since I suspect XSLT is far slower than C#. Is that a good idea? Or what else can I do to solve this issue?

    Read the article

  • Handling deleted users - separate or same table?

    - by Alan Beats
    The scenario is that I've got an expanding set of users, and as time goes by, users will cancel their accounts, which we currently mark as 'deleted' (with a flag) in the same table. If users with the same email address (that's how users log in) wish to create a new account, they can sign up again, but a NEW account is created. (We have unique ids for every account, so email addresses can be duplicated between live and deleted accounts.) What I've noticed is that all across our system, in the normal course of things, we constantly query the users table checking that the user is not deleted, whereas what I'm thinking is that we don't need to do that at all!

    [Clarification 1: by 'constantly querying', I mean that we have queries like '... FROM users WHERE isdeleted="0" AND ...'. For example, we may need to fetch all users registered for all meetings on a particular date, so in THAT query we also have FROM users WHERE isdeleted="0". Does this make my point clearer?]

    The options:

    (1) Continue keeping deleted users in the 'main' users table.
    (2) Keep deleted users in a separate table (mostly required for historical book-keeping).

    What are the pros and cons of either approach?

    Read the article

  • What Pattern will solve this - fetching dependent record from database

    - by tunmise fasipe
    I have these classes:

        class Match {
            int MatchID;
            int TeamID;    // used to reference Team
            // ... other fields
        }

    (Note: a Match actually has 2 teams, which means 2 TeamIDs.)

        class Team {
            int TeamID;
            string TeamName;
        }

    In my view I need to display a List<Match> showing the TeamName, so I added another field:

        class Match {
            int MatchID;
            int TeamID;    // used to reference Team
            // ... other fields
            string TeamName;
        }

    I can now do:

        Match m = getMatch(id);
        m.TeamName = getTeamName(m.TeamId);  // get name from database

    But for a List<Match>, getTeamName(TeamId) will go to the database to fetch the TeamName for each TeamID. For a page of 10 matches, that could be (10 x 2 teams) = 20 trips to the database. To avoid this, I had the idea of loading everything once, storing it in memory, and only looking up the TeamName in memory. This made me rethink: what if there are 5000 records or more? What pattern is used to solve this, and how?
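    This is the classic N+1 queries problem, and the usual fix is a batched lookup: collect the ids for the page, fetch all the names in one query, and join in memory. A sketch in PHP with PDO (the column names, and the home/away split, are assumptions):

        /** @param int[] $teamIds  @return array<int, string> TeamID => TeamName */
        function teamNames(\PDO $db, array $teamIds): array {
            $ids = array_values(array_unique($teamIds));
            if ($ids === []) {
                return [];
            }
            $in = implode(',', array_fill(0, count($ids), '?'));
            $stmt = $db->prepare("SELECT TeamID, TeamName FROM Team WHERE TeamID IN ($in)");
            $stmt->execute($ids);
            return $stmt->fetchAll(\PDO::FETCH_KEY_PAIR);
        }

        // One query per page instead of two per match:
        $names = teamNames($db, array_merge(
            array_column($matches, 'HomeTeamID'),
            array_column($matches, 'AwayTeamID')
        ));
        foreach ($matches as &$m) {
            $m['HomeTeamName'] = $names[$m['HomeTeamID']];
            $m['AwayTeamName'] = $names[$m['AwayTeamID']];
        }
        unset($m);

    For 5000+ rows the same idea scales by keying the lookup per request (an identity map) rather than loading the whole Team table into memory up front.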

    Read the article

  • Anemic Domain Model, Business Logic and DataMapper (PHP)

    - by sunwukung
    I've implemented a rudimentary ORM layer based on DataMapper (I don't want to use a full-blown ORM like Propel or Doctrine; for anything beyond simple fetch/save operations I prefer to access the data directly through a SQL abstraction layer). Following the DataMapper pattern, I've endeavoured to keep all persistence operations in the Mapper, including the location of related entities. My entities have access to their Mapper, although I try not to call Mapper logic from the Entity interface (although this would be simple enough). The result is:

        // get a mapper and produce an entity
        $ProductMapper = $di->get('product_mapper');
        $Product = $ProductMapper->find('[email protected]', 'email');

        // .. mutate some values .. save
        $ProductMapper->save($Product);

        // uses __get to trigger relation acquisition
        $Manufacturer = $Product->manufacturer;

    I've read some articles regarding the concept of an Anemic Domain Model, i.e. a model that does not contain any "business logic". When demonstrating the sort of business logic ideally suited to a Domain Model, however, acquiring related data items is a common example. I therefore wanted to ask: is persistence logic appropriate in Domain Model objects?
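    A sketch of how that __get hook might delegate to the mapper while keeping the entity itself thin (everything except the find/save shown above is an assumption, and the classes are minimal stubs):

        class Manufacturer {}

        class ProductMapper {
            public function findManufacturer(int $id): Manufacturer {
                return new Manufacturer();  // stand-in for the real DB lookup
            }
        }

        class Product {
            private ?Manufacturer $manufacturer = null;

            public function __construct(
                private ProductMapper $mapper,
                private int $manufacturerId,
            ) {}

            public function __get(string $name): mixed {
                if ($name === 'manufacturer') {
                    // Lazily delegate to the mapper on first access, then memoize;
                    // the entity never touches SQL itself.
                    return $this->manufacturer ??=
                        $this->mapper->findManufacturer($this->manufacturerId);
                }
                throw new \LogicException("Unknown property: $name");
            }
        }

    Kept this way, the entity holds navigation logic but no persistence logic, which is one common answer to the question above.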

    Read the article

  • How far should one take e-mail address validation?

    - by Mike Tomasello
    I'm wondering how far people should take the validation of e-mail addresses. My field is primarily web development, but this applies anywhere. I've seen a few approaches:

    - simply checking for an "@", which is dead simple but of course not that reliable
    - a more complex regex test for standard e-mail formats
    - a full regex against RFC 2822 (the problem with this is that an e-mail address might be valid but probably not what the user meant)
    - DNS validation
    - SMTP validation

    As many people might know (but many don't), e-mail addresses can have a lot of strange variations that most people don't usually consider (see RFC 2822 3.4.1), but you have to think about the goals of your validation: are you simply trying to ensure that mail can be sent to an address, or that it is what the user probably meant to put in (which is unlikely in a lot of the more obscure cases of otherwise 'valid' addresses)? An option I've considered is simply giving a warning for a more esoteric address but still allowing the request to go through, but this adds complexity to a form and most users are likely to be confused. While DNS validation / SMTP validation seem like no-brainers, I foresee problems where the DNS server / SMTP server is temporarily down and a user is unable to register somewhere, or the user's SMTP server doesn't support the required features. How might some experienced developers out here handle this? Are there any other approaches than the ones I've listed?

    Edit: I completely forgot the most obvious of all, sending a confirmation e-mail! Thanks to answerers for pointing that one out. Yes, this one is pretty foolproof, but it does require extra hassle on the part of everyone involved. The user has to fetch some e-mail, and the developer needs to remember user data before they're even confirmed as valid.
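    As a PHP reference point, the built-in filter plus a DNS check covers the first four approaches in the list above without hand-rolled regexes (sendConfirmationMail() is a hypothetical application function):

        $address = '[email protected]';

        // Syntactic check: accepts normal addresses, rejects obvious garbage,
        // without trying to cover every RFC 2822 corner case.
        if (filter_var($address, FILTER_VALIDATE_EMAIL) === false) {
            exit("Please re-check your address.\n");
        }

        // DNS check: the domain should have at least an MX (or fallback A) record.
        $domain = substr(strrchr($address, '@'), 1);
        if (!checkdnsrr($domain, 'MX') && !checkdnsrr($domain, 'A')) {
            exit("We can't find a mail server for $domain.\n");
        }

        // Deliverability is only proven by the confirmation mail itself.
        sendConfirmationMail($address, bin2hex(random_bytes(16)));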

    Read the article

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy about the current situation, but I don't have a better alternative. My only idea would be to use the NHibernate second-level cache, but I have nearly no experience with it. I know many companies use Redis or memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
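    For comparison, the discipline that keeps this manageable is pairing read-through with write-through in one place, so the cache and the DB can only change together. A compressed sketch (in PHP like the other examples on this page; the schema names are assumptions):

        class CustomerCache {
            /** @var array<int, array> */
            private array $byId = [];

            public function __construct(private \PDO $db) {}

            public function get(int $customerId): array {
                // Read-through: first request loads from the DB, later ones hit memory.
                return $this->byId[$customerId] ??= $this->load($customerId);
            }

            public function update(int $customerId, array $fields): void {
                // Write-through: DB first, then the cached copy, in one method.
                // The drift described above comes from deferring or scattering
                // this second step across the codebase.
                $set = implode(', ', array_map(fn ($k) => "$k = ?", array_keys($fields)));
                $this->db->prepare("UPDATE customers SET $set WHERE id = ?")
                         ->execute([...array_values($fields), $customerId]);
                $this->byId[$customerId] = array_merge($this->get($customerId), $fields);
            }

            private function load(int $customerId): array {
                $stmt = $this->db->prepare('SELECT * FROM customers WHERE id = ?');
                $stmt->execute([$customerId]);
                return $stmt->fetch(\PDO::FETCH_ASSOC) ?: [];
            }
        }

    Off-the-shelf caches (Redis, memcached, NHibernate's second-level cache) mostly buy the same discipline plus eviction and cross-process sharing, rather than raw speed over an in-process dictionary.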

    Read the article

  • Identifying elements from data feeds generated by affiliate sites

    - by SPI
    I am working with data feeds from affiliate sites. The basic idea is to provide an interface where the user can paste a link to an XML data feed (these are huge, by the way: around 60 MB) that would then be streamed, parsed into small chunks, and mined for the required data, which would then be stored in the database. The problem is that different affiliate sites have different schemas for their XML. It is a little hard mapping the elements in an XML document to your database attributes when you don't actually know which element contains what. My solution: use XPath to traverse the first parent and its descendants, fetch the elements as well as the data, and ask the user to map this data to the attributes in the database by selecting from a set of radio buttons that represent the database attributes. This would be done just once for each new feed; once the system knows what's what, it would automatically upload the data from the XML to the database. Does this sound viable? Is there a better solution? I realize this leaves an uncomfortable opening for human error. Thanks.
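    A sketch of that sampling step in PHP: stream to the first record element with XMLReader (so the 60 MB file is never loaded whole) and list its child elements for the mapping screen. The element name 'product' and the file name are assumptions:

        $reader = new XMLReader();
        $reader->open('feed.xml');

        // Walk the stream until the first record element appears.
        while ($reader->read()) {
            if ($reader->nodeType === XMLReader::ELEMENT && $reader->localName === 'product') {
                // Expand just this one record into a DOM node for inspection.
                $record = simplexml_import_dom($reader->expand(new DOMDocument()));
                foreach ($record->children() as $field) {
                    // These name/value pairs would drive the radio-button mapping UI.
                    echo $field->getName(), ' => ', trim((string) $field), "\n";
                }
                break;
            }
        }
        $reader->close();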

    Read the article

  • MVC design patterns

    - by insane-36
    I have an application, and it does not have a very good structure. It seems to me that I have tried to stick to the MVC design pattern, but a senior engineer claims that I have no design patterns and my code is a mess. How I have structured the code: I have a couple of NSManagedObject model classes, which represent the model in my case, and the RestKit library, which encapsulates NSURLConnection and the URL requests. I fire the request from the view controller itself, and when the request completes I create a predicate and populate the table view. Wherever I need a custom view, I either create it in a nib or in a custom subclass of UIView. I have used the delegation pattern and notifications to communicate with view controllers and views, and block callbacks with RestKit. But the senior engineer is very new to iOS; he has been doing it for 2 months now, though he is a good Java programmer. So, what is the MVC pattern here? Are the Core Data models not acting as model objects, the view controllers as controllers, and the views as views? I don't see any other place or case where I should create my own model objects, since most of the models are NSManagedObject subclasses.

    Read the article
