Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • Extracting [] elements from a form collection in MVC - should I use ICollection when I have a mix of types?

    - by bergin
    Hi there. I have looked at Phil Haack's post on model binding to a list (the books example) at http://haacked.com/archive/2008/10/23/model-binding-to-a-list.aspx, which has been useful, but I have a mix of data types. I use a view model so that I can have a mix of objects, in this case: Order (i.e. order.id, order.date etc.), Customer, SoilSamplingOrder and a list of SoilSamplingSubJobs, which posts as [0].id, [0].field, [1].id, [1].field etc. Perhaps I should be using ICollection instead of List? I had problems getting UpdateModel to work, so I used an extract-from-collection method. The first four method calls (orderRepository.FindOrder(id); etc.) give the model the original to be edited, but after this point I'm a little lost as to how to update the sub-jobs. I hope I have delineated enough to make sense of the problem.

        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Order order = orderRepository.FindOrder(id);
            Customer cust = orderRepository.FindCustomer(order.customer_id);
            IList<SoilSamplingSubJob> sssj = orderRepository.FindSubOrders(id);
            SoilSamplingOrder sso = orderRepository.FindSoilSampleOrder(id);
            try
            {
                UpdateModel(order, collection.ToValueProvider());
                UpdateModel(cust, collection.ToValueProvider());
                UpdateModel(sso, collection.ToValueProvider());
                IList<SoilSamplingSubJob> sssjs = orderRepository.extractSSSJ(collection);
                foreach (var sj in sssjs)
                    UpdateModel(sso, collection.ToValueProvider());
                orderRepository.Save();
                return RedirectToAction("Details", new { id = order.order_id });
            }
            catch
            {
                return View();
            }
        }
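
    One possible direction, as a sketch only (not the poster's actual code): let the default model binder build the sub-job list from the indexed field names instead of a hand-rolled extract-from-collection method, then copy the posted values onto the tracked entities. The "subjobs" prefix, the id/field members and the repository calls are assumptions carried over from the question, and the TryUpdateModel overload taking a prefix and value provider is assumed to be available in the MVC version in use.

        // Hypothetical sketch: assumes the form fields are named "subjobs[0].id",
        // "subjobs[0].field", ... as in Haack's model-binding-to-a-list post.
        // Requires "using System.Linq;" for FirstOrDefault.
        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Order order = orderRepository.FindOrder(id);
            IList<SoilSamplingSubJob> subjobs = orderRepository.FindSubOrders(id);

            UpdateModel(order, collection.ToValueProvider());

            // Let the binder materialize the posted rows...
            var posted = new List<SoilSamplingSubJob>();
            if (TryUpdateModel(posted, "subjobs", collection.ToValueProvider()))
            {
                // ...then copy the editable fields onto the persisted sub-jobs.
                foreach (var p in posted)
                {
                    var existing = subjobs.FirstOrDefault(s => s.id == p.id);
                    if (existing != null)
                        existing.field = p.field;
                }
            }

            orderRepository.Save();
            return RedirectToAction("Details", new { id = order.order_id });
        }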

    Read the article

  • Rails 2.3 session

    - by Sam Kong
    Hi, I am developing a Rails 2.3.2 app. I need to keep a session_id on an order record, retrieve it, and finally delete the session_id when the order is completed. It worked when I used the cookie session store, but it doesn't with the active_record store. (I restarted my browser, so it is not a cache issue.) I know Rails 2.3 implements lazy session loading; I have read some information about it but am still confused. Can somebody clarify how to use session_id in such a case? What I am doing is this: a user makes an order going through several pages. There is no sign-up and no login, so I keep the session_id in the order record so that no other user can access the order:

        @order = Order.last :conditions => { :id => params[:id], :session_id => session[:session_id] }

    When the order is finished, I set the session_id column to nil. How would you implement such a case in a lazy-session (and active_record store) environment? Thanks. Sam
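
    A hedged sketch of the usual workaround under lazy sessions (the filter name and its placement are assumptions, not from the post): with lazy loading the session row is only created once something is written to it, and with the active_record store the id is exposed through request.session_options rather than session[:session_id], so touch the session first and then read the id.

        # Hypothetical sketch for Rails 2.3 with the active_record session store.
        class OrdersController < ApplicationController
          before_filter :ensure_session

          def create
            @order = Order.new(params[:order])
            @order.session_id = current_session_id   # tie the order to this browser session
            @order.save
          end

          private

          # Writing any value forces the lazy session to be loaded/created.
          def ensure_session
            session[:initialized] ||= true
          end

          def current_session_id
            request.session_options[:id]
          end
        end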

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales, for example the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- Order number, unique across all orders
            ProductID, -- Product ID, can be used as a key to look up product info in another table
            Price,     -- Price of the product per unit at the time of the order
            Quantity,  -- Quantity of the product for the order
            Total,     -- Total cost of the order for the product (Price * Quantity)
            Date,      -- Date of the order
            StoreID,   -- The store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query, for example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information: StoreID (key), ProductID, TotalCost, QTY, AvgPrice, and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered in the above scenario?
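
    A hedged sketch of the trigger-maintained summary table the question proposes (the question never names an RDBMS, so SQL Server syntax, the table name, and the column types here are all assumptions). AvgPrice is deliberately not stored, since it can always be derived as TotalCost/QTY and keeping a stored ratio consistent under updates is extra work.

        -- Hypothetical summary table (names and types invented for illustration).
        CREATE TABLE SalesSummaryTable (
            StoreID    int   NOT NULL,
            ProductID  int   NOT NULL,
            TotalCost  money NOT NULL DEFAULT 0,
            QTY        int   NOT NULL DEFAULT 0,
            PRIMARY KEY (StoreID, ProductID));

        -- Keep the summary in step with inserts into the history table.
        CREATE TRIGGER trg_SalesHistory_Insert ON SalesHistoryTable AFTER INSERT
        AS
        BEGIN
            MERGE SalesSummaryTable AS s
            USING (SELECT StoreID, ProductID,
                          SUM(Total) AS Total, SUM(Quantity) AS Quantity
                     FROM inserted
                    GROUP BY StoreID, ProductID) AS i
               ON s.StoreID = i.StoreID AND s.ProductID = i.ProductID
            WHEN MATCHED THEN
                UPDATE SET s.TotalCost = s.TotalCost + i.Total,
                           s.QTY       = s.QTY + i.Quantity
            WHEN NOT MATCHED THEN
                INSERT (StoreID, ProductID, TotalCost, QTY)
                VALUES (i.StoreID, i.ProductID, i.Total, i.Quantity);
        END;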

    Read the article

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE: --query 1: subquery for Color is second WITH vw AS ( SELECT p.[ID], (SELECT TOP(1) [FirstName] FROM [Preference] WHERE p.ID = ID AND [FirstName] IS NOT NULL ORDER BY [LastModified] DESC) [FirstName], (SELECT TOP(1) [Color] FROM [Preference] WHERE p.ID = ID AND [Color] IS NOT NULL ORDER BY [LastModified] DESC) [Color] FROM Person p ) SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray'; --query 2: subquery for Color is first WITH vw AS ( SELECT p.[ID], (SELECT TOP(1) [Color] FROM [Preference] WHERE p.ID = ID AND [Color] IS NOT NULL ORDER BY [LastModified] DESC) [Color], (SELECT TOP(1) [FirstName] FROM [Preference] WHERE p.ID = ID AND [FirstName] IS NOT NULL ORDER BY [LastModified] DESC) [FirstName] FROM Person p ) SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray'; If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order the subqueries are written. There is a filter applied to the result of the outer join for color, to filter out rows where the color is not 'Gray'. (It's odd to me that SQL would use an outer join for the color subquery since I have a non-null constraint on the result of the color subquery, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1 because fewer rows are involved with the second join. All reasons for constructing such a statement aside, is this an expected behavior? Shouldn't SQL server opt to move the filter as early as possible in the query plan, regardless of the order the subqueries are written?

    Read the article

  • Linq duplicate removal with a twist

    - by Danthar
    I have a list that contains all the status items of each order. The problem I have is that I need to remove all the items whose status/log-date combination is not the highest, e.g.:

        var inputs = new List<StatusItem>();
        // note that the 3rd argument is simply a modifier that adds that amount of seconds
        // to the current datetime, to make testing easier
        inputs.Add(new StatusItem(123, 30, 1));
        inputs.Add(new StatusItem(123, 40, 2));
        inputs.Add(new StatusItem(123, 50, 3));
        inputs.Add(new StatusItem(123, 40, 4));
        inputs.Add(new StatusItem(123, 50, 5));
        inputs.Add(new StatusItem(100, 20, 6));
        inputs.Add(new StatusItem(100, 30, 7));
        inputs.Add(new StatusItem(100, 20, 8));
        inputs.Add(new StatusItem(100, 30, 9));
        inputs.Add(new StatusItem(100, 40, 10));
        inputs.Add(new StatusItem(100, 50, 11));
        inputs.Add(new StatusItem(100, 40, 12));

        var l = from i in inputs
                group i by i.internalId into cg
                select from s in cg
                       group s by s.statusId into sg
                       select sg.OrderByDescending(n => n.date).First();

    This creates a list that returns me the following:

        order 123 status 30 date 4/9/2010 6:44:21 PM
        order 123 status 40 date 4/9/2010 6:44:24 PM
        order 123 status 50 date 4/9/2010 6:44:25 PM
        order 100 status 20 date 4/9/2010 6:44:28 PM
        order 100 status 30 date 4/9/2010 6:44:29 PM
        order 100 status 40 date 4/9/2010 6:44:32 PM
        order 100 status 50 date 4/9/2010 6:44:31 PM

    This is ALMOST correct. However, that last line with status 50 needs to be filtered out as well, because it was overruled by status 40 in the history list. You can tell by the fact that its date is lower than the date of the "last" status item with status 40. I was hoping someone could give me some pointers, because I'm stuck.

    Read the article

  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app. We have an order refresh verification unit: it has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange. Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to report the verification as unsuccessful. I have done some basic coding in a hurry, like the following (see also the sketch after the pseudocode):

        for( N times )
        {
            for_each ( sent_request_order ) // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED
                if( not_found ) // not refreshed from exchange, continue to next order
                if( found )
                    case NEW    : // check for new status, mark verification done
                    case MODIFY : // check for modified status;
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
                    case CANCEL : // check for cancelled status;
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
            }
            if( all_verified ) exit from verification.
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... a lot of boolean flags are used for indication. Is there a better approach for doing this, perhaps by splitting responsibilities across classes? I know this is not a general question, but all these flags are getting tedious to handle.
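
    One hedged sketch of an alternative (the type and member names below are invented, not from the question): give each tracked order a single state value instead of a collection of parallel booleans, and let the retry loop operate on states. A per-order state plus a count of still-pending orders replaces most of the flags; the exchange lookup is passed in as a callable so the loop stays testable.

        // Minimal sketch, assuming C++11; the confirmation check is an injected hook.
        #include <chrono>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        enum class RequestType { New, Modify, Cancel };
        enum class VerifyState { Pending, Verified, Failed };

        struct TrackedOrder {
            long        id    = 0;
            RequestType type  = RequestType::New;
            VerifyState state = VerifyState::Pending;
        };

        // Returns true if every order was confirmed within maxRetries attempts.
        // 'confirmed' checks the refreshed snapshot (DB / shared memory) for one order.
        bool verify_orders(std::vector<TrackedOrder>& orders,
                           int maxRetries,
                           std::chrono::seconds interval,
                           const std::function<bool(const TrackedOrder&)>& confirmed)
        {
            for (int attempt = 0; attempt < maxRetries; ++attempt) {
                std::size_t pending = 0;
                for (auto& o : orders) {
                    if (o.state != VerifyState::Pending) continue;   // already done
                    if (confirmed(o))
                        o.state = VerifyState::Verified;
                    else
                        ++pending;                                   // revisit next round
                }
                if (pending == 0) return true;                       // all verified, stop early
                std::this_thread::sleep_for(interval);               // wait T seconds
            }
            for (auto& o : orders)                                   // anything still pending has failed
                if (o.state == VerifyState::Pending) o.state = VerifyState::Failed;
            return false;
        }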

    Read the article

  • ASP.NET Bind to IEnumerable

    - by JFoulkes
    Hi, I'm passing the type IEnumerable<Item> to my view, and for each item I output an Html.TextBox to enter the details into. When I post this back to my controller, the collection is empty and I can't see why.

        public class Item
        {
            public Order Order { get; set; }
            public string Title { get; set; }
            public double Price { get; set; }
        }

    My GET method:

        public ActionResult AddItems(Order order)
        {
            Item itemOne = new Item { Order = order };
            Item itemTwo = new Item { Order = order, };
            IList<Item> items = new List<Item> { itemOne, itemTwo };
            return View(items);
        }

    The view:

        <% int i = 0;
           foreach (var item in Model) { %>
            <p>
                <label for="Title">Item Title:</label>
                <%= Html.TextBox("items[" + i + "].Title") %>
                <%= Html.ValidationMessage("items[" + i + "].Title", "*")%>
            </p>
            <p>
                <label for="Price">Item Price:</label>
                <%= Html.TextBox("items[" + i + "].Price") %>
                <%= Html.ValidationMessage("items[" + i + "].Price", "*")%>
            </p>
        <% i++; } %>

    The POST method:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult AddItems(IEnumerable<Item> items)
        {
            try
            {
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

    At the moment I just have a breakpoint on the POST method to check what I'm getting back.

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack these in the most optimum way. By identical I mean having the same number of order lines containing the same SKUs and same order quantities. To achieve this I was thinking about hashing each order. We can then group by hash to quickly see which orders are the same. We are moving from an Access database to a PostgreSQL database and we have .NET based systems for data loading and general order processing systems, so we can either do the hashing during the data loading or hand this task over to the DB. My question firstly is should the hashing be managed by DB, possibly using triggers, or should the hash be created on-the-fly using a view or something? And secondly would it be best to calculate a hash for each order line and then to combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and store the value in the orders table? TIA
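
    A hedged sketch of the database-side option (table and column names are invented; assumes PostgreSQL with the pgcrypto extension for digest()): build the order-level hash from the order's lines sorted by SKU, so line order can never change the hash, and group on the result. Hashing the sorted, concatenated lines in one pass sidesteps the question of how to combine per-line hashes; the same query could live in a view, or the expression could be used inside a trigger that stamps a hash column on the orders table.

        -- Hypothetical schema: order_lines(order_id, sku, qty)
        CREATE EXTENSION IF NOT EXISTS pgcrypto;

        SELECT order_id,
               encode(digest(string_agg(sku || ':' || qty, ',' ORDER BY sku), 'md5'), 'hex') AS order_hash
        FROM order_lines
        GROUP BY order_id;

        -- Orders with identical line sets then group together, e.g.:
        -- SELECT order_hash, array_agg(order_id) FROM (...) x GROUP BY order_hash;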

    Read the article

  • I have to manually change the DNS suffix order every time I connect to VPN. Can I change this permanently or fix the problem somehow?

    - by CarlB
    Sorry in advance but I'm a programmer, not a network engineer, so I'm a noob at this stuff. Anyway, when I am not connected to VPN from my work PC at home, I have the following DNS suffixes listed (real domain names substituted): enterprise.org network.org company.com us.enterprise.org After connecting to VPN, one more DNS suffix is added to the very top of the list: problem-domain.com At this point, most network functions that I can normally perform when actually connected to the LAN in the office are unusable. I get error messages about the network paths not being found and what-not. Anyway, I played around with the suffixes and realized that if I just moved problem-domain.com down one spot to the second in the list, all the problems went away. Unfortunately, it returns to the top spot every time I reconnect, and I tend to get disconnected frequently. Is there something else I can do about this or should I just contact the IT department? I've had this problem before and they weren't able to resolve it but I suppose it would be worth trying again if I could get a different person on the job. What I don't understand is that I thought it didn't matter what order the suffixes were in? Isn't Windows supposed to go through each suffix until it finds a match (or has gone through all the suffixes)? Why is it quitting after the first one? Thanks in advance.

    Read the article

  • localhost/phpmyadmin pulls blank page

    - by Atul Modi
    When I tried configuring local machine as a Internet Gateway with website development capabilities over it and I installed all required software into it. I also had disable the selinux into it. But PROBLEM is when I do http://localhost/phpMyAdmin or all lower case than the page shows it as a blank page. I am pasting code from httpd.conf file into this as well as from phpMyAdmin.conf file I am using Fedora 16 for this. httpd.conf ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 5 StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule dbd_module modules/mod_dbd.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule substitute_module modules/mod_substitute.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule version_module 
modules/mod_version.so Include conf.d/*.conf User apache Group apache ServerAdmin root@localhost UseCanonicalName Off DocumentRoot "/var/www/html" Options FollowSymLinks AllowOverride None Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all UserDir disabled DirectoryIndex index.html index.htm index.php AccessFileName .htaccess Order allow,deny Deny from all Satisfy All TypesConfig /etc/mime.types DefaultType text/plain MIMEMagicFile conf/magic HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %s %b" common LogFormat "%{Referer}i - %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" Options Indexes MultiViews FollowSymLinks AllowOverride None Order allow,deny Allow from all # Location of the WebDAV lock database. DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" AllowOverride None Options None Order allow,deny Allow from all IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable Charset=UTF-8 AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html IndexIgnore .??* *~ # HEADER README* RCS CVS *,v *,t AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-tar .tgz AddType application/x-httpd-php .php AddType application/x-httpd-php .xml AddHandler application/x-httpd-php .xml AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en ForceLanguagePriority Prefer Fallback ErrorDocument 400 /error/HTTP_BAD_REQUEST.html.var ErrorDocument 401 /error/HTTP_UNAUTHORIZED.html.var ErrorDocument 403 /error/HTTP_FORBIDDEN.html.var ErrorDocument 404 /error/HTTP_NOT_FOUND.html.var ErrorDocument 405 /error/HTTP_METHOD_NOT_ALLOWED.html.var ErrorDocument 408 /error/HTTP_REQUEST_TIME_OUT.html.var ErrorDocument 500 /error/HTTP_INTERNAL_SERVER_ERROR.html.var ErrorDocument 503 /error/HTTP_SERVICE_UNAVAILABLE.html.var BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4.0" force-response-1.0 BrowserMatch "Java/1.0" force-response-1.0 BrowserMatch "JDK/1.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully Order allow,deny Allow from all # phpMyAdmin.conf Alias /phpMyAdmin /usr/share/phpMyAdmin Alias /phpmyadmin /usr/share/phpMyAdmin Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Allow,Deny Allow from All Allow from 127.0.0.1 Allow from ::1 Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Order Deny,Allow Deny from All Allow from None Can anyone help into this area please. Urgent reply will be appreciatable because i am struggling since one and half month for this matter. thank you, Atul
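
    A hedged first diagnostic step (the paths are Fedora/Apache defaults and may differ on this system): a completely blank page from a PHP application usually means a fatal PHP error with display_errors turned off, so checking that the PHP module is actually loaded and watching the Apache error log is quicker than combing through httpd.conf.

        # Check that PHP and the MySQL extension are installed and that Apache loads mod_php
        rpm -q php php-mysql
        httpd -M | grep -i php

        # Watch the error log while reloading http://localhost/phpmyadmin
        tail -f /var/log/httpd/error_log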

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
    If you’ve ever done spatial work with SQL Server, I hope you’ve come across the ‘nearest’ problem. You have five thousand stores around the world, and you want to identify the one that’s closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both! You know how to do this. You don’t want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let’s do this (but I’m going to use addresses in AdventureWorks2012, as I don’t have a list of stores). Oh, and before I do, let’s make sure we have a spatial index in place. I’m going to use the default options. CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation); And my actual query: WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Great! This is definitely working. I know both those City locations, even if the AddressLine1s don’t quite ring a bell. I’m sure I’ll be able to find them next time I’m in the area. But of course what I’m concerned about from a querying perspective is what’s happened behind the scenes – the execution plan. This isn’t pretty. It’s not using my index. It’s sucking every row out of the Address table TWICE (which sucks), and then it’s sorting them by the distance to find the smallest one. It’s not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that’s pretty neat. But yeah – users of my nifty website aren’t going to like how long that query takes. The frustrating thing is that I know that I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the ‘nearest’ problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I’m not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it’s split the data up into 4 threads, but it’s still slow, and not using my index. It’s disappointing. But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it’s a thing of beauty! Part of the plan is here: It’s massive, and it’s ugly, and it uses a TVF… but it’s quick. The way it works is to hook into the GeodeticTessellation function, which is essentially finds where the point is, and works out through the spatial index cells that surround it. This then provides a framework to be able to see into the spatial index for the items we want. 
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that’s going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn’t using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error: Msg 8622, Level 16, State 1, Line 1 Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. And I’m almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post. WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT     l.Name,     COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),     COALESCE(a1.City,a2.City,a3.City),     s.Name AS [State],     c.Name AS Country FROM MyLocations AS l OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a1 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000     AND a1.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a2 OUTER APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000     AND a2.AddressID IS NULL     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a3 JOIN Person.StateProvince AS s     ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID) JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; But this isn’t friendly-looking at all, and I’d use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I’m dealing with SQL 2012 (and later) versions. So why isn’t my query doing what it’s supposed to? Remember the query... WITH MyLocations AS (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),                        ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))                ) t (Name, Geo)) SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country FROM MyLocations AS l CROSS APPLY (     SELECT TOP (1) *     FROM Person.Address AS ad     ORDER BY l.Geo.STDistance(ad.SpatialLocation)     ) AS a JOIN Person.StateProvince AS s     ON s.StateProvinceID = a.StateProvinceID JOIN Person.CountryRegion AS c     ON c.CountryRegionCode = s.CountryRegionCode ; Well, I just wasn’t reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index: A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses. The TOP clause cannot contain a PERCENT statement. The WHERE clause must contain a STDistance() method. 
If there are multiple predicates in the WHERE clause then the predicate containing STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause. The first expression in the ORDER BY clause must use the STDistance() method. Sort order for the first STDistance() expression in the ORDER BY clause must be ASC. All the rows for which STDistance returns NULL must be filtered out. Let’s start from the top. 1. Needs a spatial index on one of the columns that’s in the STDistance call. Yup, got the index. 2. No ‘PERCENT’. Yeah, I don’t have that. 3. The WHERE clause needs to use STDistance(). Ok, but I’m not filtering, so that should be fine. 4. Yeah, I don’t have multiple predicates. 5. The first expression in the ORDER BY is my distance, that’s fine. 6. Sort order is ASC, because otherwise we’d be starting with the ones that are furthest away, and that’s tricky. 7. All the rows for which STDistance returns NULL must be filtered out. But I don’t have any NULL values, so that shouldn’t affect me either. ...but something’s wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (eg, differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – “STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match.” So if I simply make sure that I’m filtering out the rows that return NULL… …then it’s blindingly fast, I get the right results, and I’ve got the complex-but-brilliant plan that I wanted. It just wasn’t overly intuitive, despite being documented. @rob_farley
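
    For reference, a hedged reconstruction of the final form of the query the post arrives at (the post describes the fix but doesn't print the query): the only change from the original is the IS NOT NULL predicate on STDistance, which satisfies requirements 3 and 7 above and lets the optimizer take the spatial-index nearest-neighbour plan.

        WITH MyLocations AS
            (SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                                   ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
                           ) t (Name, Geo))
        SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
        FROM MyLocations AS l
        CROSS APPLY (
            SELECT TOP (1) *
            FROM Person.Address AS ad
            WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL   -- filter out the NULL distances
            ORDER BY l.Geo.STDistance(ad.SpatialLocation)
            ) AS a
        JOIN Person.StateProvince AS s ON s.StateProvinceID = a.StateProvinceID
        JOIN Person.CountryRegion AS c ON c.CountryRegionCode = s.CountryRegionCode;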

    Read the article

  • Python dictionary key missing

    - by Greg K
    I thought I'd put together a quick script to consolidate the CSS rules I have distributed across multiple CSS files, then I can minify it. I'm new to Python but figured this would be a good exercise to try a new language. My main loop isn't parsing the CSS as I thought it would. I populate a list with selectors parsed from the CSS files to return the CSS rules in order. Later in the script, the list contains an element that is not found in the dictionary. for line in self.file.readlines(): if self.hasSelector(line): selector = self.getSelector(line) if selector not in self.order: self.order.append(selector) elif selector and self.hasProperty(line): # rules.setdefault(selector,[]).append(self.getProperty(line)) property = self.getProperty(line) properties = [] if selector not in rules else rules[selector] if property not in properties: properties.append(property) rules[selector] = properties # print "%s :: %s" % (selector, "".join(rules[selector])) return rules Error encountered: $ css-combine combined.css test1.css test2.css Traceback (most recent call last): File "css-combine", line 108, in <module> c.run(outfile, stylesheets) File "css-combine", line 64, in run [(selector, rules[selector]) for selector in parser.order], KeyError: 'p' Swap the inputs: $ css-combine combined.css test2.css test1.css Traceback (most recent call last): File "css-combine", line 108, in <module> c.run(outfile, stylesheets) File "css-combine", line 64, in run [(selector, rules[selector]) for selector in parser.order], KeyError: '#header_.title' I've done some quirky things in the code like sub spaces for underscores in dictionary key names in case it was an issue - maybe this is a benign precaution? Depending on the order of the inputs, a different key cannot be found in the dictionary. 
The script: #!/usr/bin/env python import optparse import re class CssParser: def __init__(self): self.file = False self.order = [] # store rules assignment order def parse(self, rules = {}): if self.file == False: raise IOError("No file to parse") selector = False for line in self.file.readlines(): if self.hasSelector(line): selector = self.getSelector(line) if selector not in self.order: self.order.append(selector) elif selector and self.hasProperty(line): # rules.setdefault(selector,[]).append(self.getProperty(line)) property = self.getProperty(line) properties = [] if selector not in rules else rules[selector] if property not in properties: properties.append(property) rules[selector] = properties # print "%s :: %s" % (selector, "".join(rules[selector])) return rules def hasSelector(self, line): return True if re.search("^([#a-z,\.:\s]+){", line) else False def getSelector(self, line): s = re.search("^([#a-z,:\.\s]+){", line).group(1) return "_".join(s.strip().split()) def hasProperty(self, line): return True if re.search("^\s?[a-z-]+:[^;]+;", line) else False def getProperty(self, line): return re.search("([a-z-]+:[^;]+;)", line).group(1) class Consolidator: """Class to consolidate CSS rule attributes""" def run(self, outfile, files): parser = CssParser() rules = {} for file in files: try: parser.file = open(file) rules = parser.parse(rules) except IOError: print "Cannot read file: " + file finally: parser.file.close() self.serialize( [(selector, rules[selector]) for selector in parser.order], outfile ) def serialize(self, rules, outfile): try: f = open(outfile, "w") for rule in rules: f.write( "%s {\n\t%s\n}\n\n" % ( " ".join(rule[0].split("_")), "\n\t".join(rule[1]) ) ) except IOError: print "Cannot write output to: " + outfile finally: f.close() def init(): op = optparse.OptionParser( usage="Usage: %prog [options] <output file> <stylesheet1> " + "<stylesheet2> ... <stylesheetN>", description="Combine CSS rules spread across multiple " + "stylesheets into a single file" ) opts, args = op.parse_args() if len(args) < 3: if len(args) == 1: print "Error: No input files specified.\n" elif len(args) == 2: print "Error: One input file specified, nothing to combine.\n" op.print_help(); exit(-1) return [opts, args] if __name__ == '__main__': opts, args = init() outfile, stylesheets = [args[0], args[1:]] c = Consolidator() c.run(outfile, stylesheets) Test CSS file 1: body { background-color: #e7e7e7; } p { margin: 1em 0em; } File 2: body { font-size: 16px; } #header .title { font-family: Tahoma, Geneva, sans-serif; font-size: 1.9em; } #header .title a, #header .title a:hover { color: #f5f5f5; border-bottom: none; text-shadow: 2px 2px 3px rgba(0, 0, 0, 1); } Thanks in advance.
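
    One hedged, defensive fix regardless of which input line the regexes fail to capture (it does not change the parsing rules themselves): make sure every selector appended to self.order also gets an entry in rules, so the later lookup can never raise KeyError. Inside CssParser.parse that would look roughly like this:

        if self.hasSelector(line):
            selector = self.getSelector(line)
            if selector not in self.order:
                self.order.append(selector)
            rules.setdefault(selector, [])      # guarantee the key exists
        elif selector and self.hasProperty(line):
            prop = self.getProperty(line)       # avoid shadowing the built-in 'property'
            if prop not in rules[selector]:     # safe now: the key is always present
                rules[selector].append(prop)

    Equivalently, the serializing comprehension could use rules.get(selector, []). Note that an empty rule list usually means hasProperty() never matched that rule's declaration lines - for example, the `^\s?` in the pattern allows at most one leading whitespace character, and a declaration sitting on the same line as its selector is skipped by the elif - so the input CSS formatting is worth checking too.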

    Read the article

  • Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets

    - by Alex Kotopoulis
    Overview ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology. This includes: Different record types in one list that use different parsing rules Hierarchical lists, for example customers with nested orders Parsing instructions in the file data, such as delimiter types, field lengths, type identifiers Complex headers such as multiple header lines or parseable information in header Skipping of lines  Conditional or choice fields Similar to the ODI File and XML File technologies, the complex file parsing is done through a JDBC driver that exposes the flat file as relational table structures. Complex files are mapped to one or more table structures, as opposed to the (simple) file technology, which always has a one-to-one relationship between file and table. The resulting set of tables follows the same concept as the ODI XML driver, table rows have additional PK-FK relationships to express hierarchy as well as order values to maintain the file order in the resulting table.   The parsing instruction format used for complex files is the nXSD (native XSD) format that is already in use with Oracle BPEL. This format extends the XML Schema standard by adding additional parsing instructions to each element. Using nXSD parsing technology, the native file is converted into an internal XML format. It is important to understand that the XML is streamed to improve performance; there is no size limitation of the native file based on memory size, the XML data is never fully materialized.  The internal XML is then converted to relational schema using the same mapping rules as the ODI XML driver. How to Create an nXSD file Complex file models depend on the nXSD schema for the given file. This nXSD file has to be created using a text editor or the Native Format Builder Wizard that is part of Oracle BPEL. BPEL is included in the ODI Suite, but not in standalone ODI Enterprise Edition. The nXSD format extends the standard XSD format through nxsd attributes. NXSD is a valid XML Schema, since the XSD standard allows extra attributes with their own namespaces. 
The following is a sample NXSD schema: <?xml version="1.0"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" elementFormDefault="qualified" xmlns:tns="http://xmlns.oracle.com/pcbpel/demoSchema/csv" targetNamespace="http://xmlns.oracle.com/pcbpel/demoSchema/csv" attributeFormDefault="unqualified" nxsd:encoding="US-ASCII" nxsd:stream="chars" nxsd:version="NXSD"> <xsd:element name="Root">         <xsd:complexType><xsd:sequence>       <xsd:element name="Header">                 <xsd:complexType><xsd:sequence>                         <xsd:element name="Branch" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>                         <xsd:element name="ListDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>                         </xsd:sequence></xsd:complexType>                         </xsd:element>                 </xsd:sequence></xsd:complexType>         <xsd:element name="Customer" maxOccurs="unbounded">                 <xsd:complexType><xsd:sequence>                 <xsd:element name="Name" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy=","/>                         <xsd:element name="Street" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="," />                         <xsd:element name="City" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" />                         </xsd:sequence></xsd:complexType>                         </xsd:element>                 </xsd:sequence></xsd:complexType> </xsd:element> </xsd:schema> The nXSD schema annotates elements to describe their position and delimiters within the flat text file. The schema above uses almost exclusively the nxsd:terminatedBy instruction to look for the next terminator chars. There are various constructs in nXSD to parse fixed length fields, look ahead in the document for string occurences, perform conditional logic, use variables to remember state, and many more. nXSD files can either be written manually using an XML Schema Editor or created using the Native Format Builder Wizard. Both Native Format Builder Wizard as well as the nXSD language are described in the Application Server Adapter Users Guide. The way to start the Native Format Builder in BPEL is to create a new File Adapter; in step 8 of the Adapter Configuration Wizard a new Schema for Native Format can be created:   The Native Format Builder guides through a number of steps to generate the nXSD based on a sample native file. If the format is complex, it is often a good idea to “approximate” it with a similar simple format and then add the complex components manually.  The resulting *.xsd file can be copied and used as the format for ODI, other BPEL constructs such as the file adapter definition are not relevant for ODI. Using this technique it is also possible to parse the same file format in SOA Suite and ODI, for example using SOA for small real-time messages, and ODI for large batches. This nXSD schema in this example describes a file with a header row containing data and 3 string fields per row delimited by commas, for example: Redwood City Downtown Branch, 06/01/2011 Ebeneezer Scrooge, Sandy Lane, Atherton Tiny Tim, Winton Terrace, Menlo Park The ODI Complex File JDBC driver exposes the file structure through a set of relational tables with PK-FK relationships. 
The tables for this example are: Table ROOT (1 row): ROOTPK Primary Key for root element SNPSFILENAME Name of the file SNPSFILEPATH Path of the file SNPSLOADDATE Date of load Table HEADER (1 row): ROOTFK Foreign Key to ROOT record ROWORDER Order of row in native document BRANCH Data BRANCHORDER Order of Branch within row LISTDATE Data LISTDATEORDER Order of ListDate within row Table ADDRESS (2 rows): ROOTFK Foreign Key to ROOT record ROWORDER Order of row in native document NAME Data NAMEORDER Oder of Name within row STREET Data STREETORDER Order of Street within row CITY Data CITYORDER Order of City within row Every table has PK and/or FK fields to reflect the document hierarchy through relationships. In this example this is trivial since the HEADER and all CUSTOMER records point back to the PK of ROOT. Deeper nested documents require this to identify parent elements. All tables also have a ROWORDER field to define the order of rows, as well as order fields for each column, in case the order of columns varies in the original document and needs to be maintained. If order is not relevant, these fields can be ignored. How to Create an Complex File Data Server in ODI After creating the nXSD file and a test data file, and storing it on the local file system accessible to ODI, you can go to the ODI Topology Navigator to create a Data Server and Physical Schema under the Complex File technology. This technology follows the conventions of other ODI technologies and is very similar to the XML technology. The parsing settings such as the source native file, the nXSD schema file, the root element, as well as the external database can be set in the JDBC URL: The use of an external database defined by dbprops is optional, but is strongly recommended for production use. Ideally, the staging database should be used for this. Also, when using a complex file exclusively for read purposes, it is recommended to use the ro=true property to ensure the file is not unnecessarily synchronized back from the database when the connection is closed. A data file is always required to be present  at the filename path during design-time. Without this file, operations like testing the connection, reading the model data, or reverse engineering the model will fail.  All properties of the Complex File JDBC Driver are documented in the Oracle Fusion Middleware Connectivity and Knowledge Modules Guide for Oracle Data Integrator in Appendix C: Oracle Data Integrator Driver for Complex Files Reference. David Allan has created a great viewlet Complex File Processing - 0 to 60 which shows the creation of a Complex File data server as well as a model based on this server. How to Create Models based on an Complex File Schema Once physical schema and logical schema have been created, the Complex File can be used to create a Model as if it were based on a database. When reverse-engineering the Model, data stores(tables) for each XSD element of complex type will be created. Use of complex files as sources is straightforward; when using them as targets it has to be made sure that all dependent tables have matching PK-FK pairs; the same applies to the XML driver as well. Debugging and Error Handling There are different ways to test an nXSD file. The Native Format Builder Wizard can be used even if the nXSD wasn’t created in it; it will show issues related to the schema and/or test data. In ODI, the nXSD  will be parsed and run against the existing test XML file when testing a connection in the Dataserver. 
If either the nXSD has an error or the data is non-compliant to the schema, an error will be displayed. Sample error message: Error while reading native data. [Line=1, Col=5] Not enough data available in the input, when trying to read data of length "19" for "element with name D1" from the specified position, using "style" as "fixedLength" and "length" as "". Ensure that there is enough data from the specified position in the input. Complex File FAQ Is the size of the native file limited by available memory? No, since the native data is streamed through the driver, only the available space in the staging database limits the size of the data. There are limits on individual field sizes, though; a single large object field needs to fit in memory. Should I always use the complex file driver instead of the file driver in ODI now? No, use the file technology for all simple file parsing tasks, for example any fixed-length or delimited files that just have one row format and can be mapped into a simple table. Because of its narrow assumptions the ODI file driver is easy to configure within ODI and can stream file data without writing it into a database. The complex file driver should be used whenever the use case cannot be handled through the file driver. Are we generating XML out of flat files before we write it into a database? We don’t materialize any XML as part of parsing a flat file, either in memory or on disk. The data produced by the XML parser is streamed in Java objects that just use XSD-derived nXSD schema as its type system. We use the nXSD schema because is the standard for describing complex flat file metadata in Oracle Fusion Middleware, and enables users to share schemas across products. Is the nXSD file interchangeable with SOA Suite? Yes, ODI can use the same nXSD files as SOA Suite, allowing mixed use cases with the same data format. Can I start the Native Format Builder from the ODI Studio? No, the Native Format Builder has to be started from a JDeveloper with BPEL instance. You can get BPEL as part of the SOA Suite bundle. Users without SOA Suite can manually develop nXSD files using XSD editors. When is the database data written back to the native file? Data is synchronized using the SYNCHRONIZE and CREATE FILE commands, and when the JDBC connection is closed. It is recommended to set the ro or read_only property to true when a file is exclusively used for reading so that no unnecessary write-backs occur. Is the nXSD metadata part of the ODI Master or Work Repository? No, the data server definition in the master repository only contains the JDBC URL with file paths; the nXSD files have to be accessible on the file systems where the JDBC driver is executed during production, either by copying or by using a network file system. Where can I find sample nXSD files? The Application Server Adapter Users Guide contains nXSD samples for various different use cases.

    Read the article

  • How to configure a Custom Datacontract Serializer or XMLSerializer

    - by user364445
    Im haveing some xml that have this structure <Person Id="*****" Name="*****"> <AccessControlEntries> <AccessControlEntry Id="*****" Name="****"/> </AccessControlEntries> <AccessControls /> <IdentityGroups> <IdentityGroup Id="****" Name="*****" /> </IdentityGroups></Person> and i also have this entities [DataContract(IsReference = true)] public abstract class EntityBase { protected bool serializing; [DataMember(Order = 1)] [XmlAttribute()] public string Id { get; set; } [DataMember(Order = 2)] [XmlAttribute()] public string Name { get; set; } [OnDeserializing()] public void OnDeserializing(StreamingContext context) { this.Initialize(); } [OnSerializing()] public void OnSerializing(StreamingContext context) { this.serializing = true; } [OnSerialized()] public void OnSerialized(StreamingContext context) { this.serializing = false; } public abstract void Initialize(); public string ToXml() { var settings = new System.Xml.XmlWriterSettings(); settings.Indent = true; settings.OmitXmlDeclaration = true; var sb = new System.Text.StringBuilder(); using (var writer = System.Xml.XmlWriter.Create(sb, settings)) { var serializer = new XmlSerializer(this.GetType()); serializer.Serialize(writer, this); } return sb.ToString(); } } [DataContract()] public abstract class Identity : EntityBase { private EntitySet<AccessControlEntry> accessControlEntries; private EntitySet<IdentityGroup> identityGroups; public Identity() { Initialize(); } [DataMember(Order = 3, EmitDefaultValue = false)] [Association(Name = "AccessControlEntries")] public EntitySet<AccessControlEntry> AccessControlEntries { get { if ((this.serializing && (this.accessControlEntries==null || this.accessControlEntries.HasLoadedOrAssignedValues == false))) { return null; } return accessControlEntries; } set { accessControlEntries.Assign(value); } } [DataMember(Order = 4, EmitDefaultValue = false)] [Association(Name = "IdentityGroups")] public EntitySet<IdentityGroup> IdentityGroups { get { if ((this.serializing && (this.identityGroups == null || this.identityGroups.HasLoadedOrAssignedValues == false))) { return null; } return identityGroups; } set { identityGroups.Assign(value); } } private void attach_accessControlEntry(AccessControlEntry entity) { entity.Identities.Add(this); } private void dettach_accessControlEntry(AccessControlEntry entity) { entity.Identities.Remove(this); } private void attach_IdentityGroup(IdentityGroup entity) { entity.MemberIdentites.Add(this); } private void dettach_IdentityGroup(IdentityGroup entity) { entity.MemberIdentites.Add(this); } public override void Initialize() { this.accessControlEntries = new EntitySet<AccessControlEntry>( new Action<AccessControlEntry>(this.attach_accessControlEntry), new Action<AccessControlEntry>(this.dettach_accessControlEntry)); this.identityGroups = new EntitySet<IdentityGroup>( new Action<IdentityGroup>(this.attach_IdentityGroup), new Action<IdentityGroup>(this.dettach_IdentityGroup)); } } [XmlType(TypeName = "AccessControlEntry")] public class AccessControlEntry : EntityBase, INotifyPropertyChanged { private EntitySet<Service> services; private EntitySet<Identity> identities; private EntitySet<Permission> permissions; public AccessControlEntry() { services = new EntitySet<Service>(new Action<Service>(attach_Service), new Action<Service>(dettach_Service)); identities = new EntitySet<Identity>(new Action<Identity>(attach_Identity), new Action<Identity>(dettach_Identity)); permissions = new EntitySet<Permission>(new Action<Permission>(attach_Permission), new 
Action<Permission>(dettach_Permission)); } [DataMember(Order = 3, EmitDefaultValue = false)] public EntitySet<Permission> Permissions { get { if ((this.serializing && (this.permissions.HasLoadedOrAssignedValues == false))) { return null; } return permissions; } set { permissions.Assign(value); } } [DataMember(Order = 4, EmitDefaultValue = false)] public EntitySet<Identity> Identities { get { if ((this.serializing && (this.identities.HasLoadedOrAssignedValues == false))) { return null; } return identities; } set { identities.Assign(identities); } } [DataMember(Order = 5, EmitDefaultValue = false)] public EntitySet<Service> Services { get { if ((this.serializing && (this.services.HasLoadedOrAssignedValues == false))) { return null; } return services; } set { services.Assign(value); } } private void attach_Permission(Permission entity) { entity.AccessControlEntires.Add(this); } private void dettach_Permission(Permission entity) { entity.AccessControlEntires.Remove(this); } private void attach_Identity(Identity entity) { entity.AccessControlEntries.Add(this); } private void dettach_Identity(Identity entity) { entity.AccessControlEntries.Remove(this); } private void attach_Service(Service entity) { entity.AccessControlEntries.Add(this); } private void dettach_Service(Service entity) { entity.AccessControlEntries.Remove(this); } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string name) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) handler(this, new PropertyChangedEventArgs(name)); } #endregion public override void Initialize() { throw new NotImplementedException(); } } [DataContract()] [XmlType(TypeName = "Person")] public class Person : Identity { private EntityRef<Login> login; [DataMember(Order = 3)] [XmlAttribute()] public string Nombre { get; set; } [DataMember(Order = 4)] [XmlAttribute()] public string Apellidos { get; set; } [DataMember(Order = 5)] public Login Login { get { return login.Entity; } set { var previousValue = this.login.Entity; if (((previousValue != value) || (this.login.HasLoadedOrAssignedValue == false))) { if ((previousValue != null)) { this.login.Entity = null; previousValue.Person = null; } this.login.Entity = value; if ((value != null)) value.Person = this; } } } public override void Initialize() { base.Initialize(); } } [DataContract()] [XmlType(TypeName = "Login")] public class Login : EntityBase { private EntityRef<Person> person; [DataMember(Order = 3)] public string UserID { get; set; } [DataMember(Order = 4)] public string Contrasena { get; set; } [DataMember(Order = 5)] public Domain Dominio { get; set; } public Person Person { get { return person.Entity; } set { var previousValue = this.person.Entity; if (((previousValue != value) || (this.person.HasLoadedOrAssignedValue == false))) { if ((previousValue != null)) { this.person.Entity = null; previousValue.Login = null; } this.person.Entity = value; if ((value != null)) value.Login = this; } } } public override void Initialize() { throw new NotImplementedException(); } } [DataContract()] [XmlType(TypeName = "IdentityGroup")] public class IdentityGroup : Identity { private EntitySet<Identity> memberIdentities; public IdentityGroup() { Initialize(); } public override void Initialize() { this.memberIdentities = new EntitySet<Identity>(new Action<Identity>(this.attach_Identity), new Action<Identity>(this.dettach_Identity)); } [DataMember(Order = 3, EmitDefaultValue = false)] [Association(Name = 
"MemberIdentities")] public EntitySet<Identity> MemberIdentites { get { if ((this.serializing && (this.memberIdentities.HasLoadedOrAssignedValues == false))) { return null; } return memberIdentities; } set { memberIdentities.Assign(value); } } private void attach_Identity(Identity entity) { entity.IdentityGroups.Add(this); } private void dettach_Identity(Identity entity) { entity.IdentityGroups.Remove(this); } } [DataContract()] [XmlType(TypeName = "Group")] public class Group : Identity { public override void Initialize() { throw new NotImplementedException(); } } but the ToXml() response something like this <Person xmlns:xsi="************" xmlns:xsd="******" ID="******" Name="*****"/><AccessControlEntries/></Person> but what i want is something like this <Person Id="****" Name="***" Nombre="****"> <AccessControlEntries/> <IdentityGroups/> </Person>

    Read the article

  • strange behaviour of grep in UNIX

    - by Happy Mittal
    When I type the command

        $ grep \\\\h junk

    the shell should interpret \\\\h as \\h, since the two pairs of \\ become \ each; and grep, in turn, should interpret \\h as \h, since \\ becomes \. So grep should search for the pattern \h in junk, which it is doing successfully. But the same thing is not working for \\\\$. Please explain why.
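
    A hedged way to see what is going on (assuming bash and GNU grep): echo the same word to print exactly what the shell hands to grep, then consider how grep's basic regular expressions read it. The likely culprit is that $ keeps its end-of-line anchor meaning inside the pattern.

        $ echo \\\\h        # prints \\h -> grep receives \\h: an escaped backslash, then h,
                            #               so it matches the text \h
        $ echo \\\\$        # prints \\$ -> grep receives \\$: an escaped backslash, then the $ anchor,
                            #               so it matches a backslash at the end of a line, not the text \$
        $ grep '\\\$' junk  # quoting (or grep -F '\$') escapes the $ as well and matches the literal text \$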

    Read the article

  • Asp.Net MVC - Rob Conery's LazyList - Count() or Count

    - by Adam
    I'm trying to create an html table for order logs for customers. A customer is defined as (I've left out a lot of stuff): public class Customer { public LazyList<Order> Orders { get; set; } } The LazyList is set when fetching a Customer: public Customer GetCustomer(int custID) { Customer c = ... c.Orders = new LazyList<Order>(_repository.GetOrders().ByOrderID(custID)); return c; } The order log model: public class OrderLogTableModel { public OrderLogTableModel(LazyList<Order> orders) { Orders = orders; Page = 0; PageSize = 25; } public LazyList<Order> Orders { get; set; } public int Page { get; set; } public int PageSize { get; set; } } and I pass in the customer.Orders after loading a customer. Now the log i'm trying to make, looks something like: <table> <tbody> <% int rowCount = ViewData.Model.Orders.Count(); int innerRows = rowCount - (ViewData.Model.Page * ViewData.Model.PageSize); foreach (Order order in ViewData.Model.Orders.OrderByDescending(x => x.StartDateTime) .Take(innerRows).OrderBy(x => x.StartDateTime) .Take(ViewData.Model.PageSize)) { %> <tr> <td> <%= order.ID %> </td> </tr> <% } %> </tbody> </table> Which works fine. But the problem is evaluating ViewData.Model.Orders.Count() literally takes about 10 minutes. I've tried with the ViewData.Model.Orders.Count property instead, and the results are the same - takes forever. I've also tried calling _repository.GetOrders().ByCustomerID(custID).Count() directly from the view and that executes perfectly within a few ms. Can anybody see any reason why using the LazyList to get a simple count would take so long? It seems like its trying to iterate through the list when getting a simple count.
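
    A hedged sketch of one way around this: keep the count and the page at the IQueryable level so both are translated to SQL, instead of calling Count() on the materialized LazyList (which enumerates every order in memory). The property and type shapes below are assumptions for illustration, not Rob Conery's actual API:

    using System;
    using System.Linq;

    public class Order
    {
        public int ID { get; set; }
        public DateTime StartDateTime { get; set; }
    }

    public class OrderLogTableModel
    {
        // Hypothetical: hold the un-materialized query rather than a LazyList.
        public IQueryable<Order> Orders { get; set; }
        public int Page { get; set; }
        public int PageSize { get; set; }

        // Translated to SELECT COUNT(*) when Orders comes from LINQ to SQL.
        public int RowCount { get { return Orders.Count(); } }

        public Order[] CurrentPage()
        {
            // Sort, skip and take in the database; only one page is materialized.
            return Orders.OrderByDescending(x => x.StartDateTime)
                         .Skip(Page * PageSize)
                         .Take(PageSize)
                         .ToArray();
        }
    }

    The original Take(innerRows) arithmetic can still be layered on top; the key point is that Count() runs against the IQueryable<Order> (in SQL) rather than against the LazyList (LINQ to Objects over every row).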

    Read the article

  • Initialize Pointer Through Function

    - by SoulBeaver
    I was browsing my teacher's code when I stumbled across this: Order* order1 = NULL; then order1 = order(customer1, product2); which calls Order* order(Customer* customer, Product* product) { return new Order(customer, product); } This looks like silly code. I'm not sure why, but the teacher initialized all pointers to NULL instead of declaring them right away (looking at the code it's entirely possible, but he chose not to). My question is: is this good or acceptable code? Does the function call have any benefits over calling a constructor explicitly? And how does new work in this case? Can I think of the code now as effectively: order1 = new Order(customer, product);

    Read the article

  • Ubercart role assignment issue

    - by minnur
    Hi there! I have an issue with the expiration date in emails. For example, I have a subscription which expires on 1st Feb 2010 and I am purchasing a new subscription (renewing my role). When the order is complete, Ubercart CA (Conditional Actions) sends an email about the role renewal and the new expiration date ([role-expiration-short]). But the message contains the wrong expiration date: I've noticed that on each order the email contains the N-1 expiration date, where N is the current purchase. This is the email message: [order-first-name] [order-last-name], Thanks to your order, [order-link], at [store-name] you have renewed the role, [role-name]. It is now set to expire on [role-expiration-short] <- ISSUE IS HERE. Thanks again, [store-name] [site-slogan] Any ideas? Thanks!

    Read the article

  • Is this type of calculation to be put in Model or Controller?

    - by Hadi
    I have Product and SalesOrder models (to simplify, 1 sales_order is only for 1 product): Product has_many SalesOrder, SalesOrder belongs_to Product. pa = Product A #2000; so1 = SalesOrder 1 order product A #1000, date:yesterday; so2 = SalesOrder 2 order product A #999, date:yesterday; so3 = SalesOrder 3 order product A #1000, date:now. Based on the date, pa.find_sales_orders_that_can_be_delivered will give: SalesOrder 1 order product A #1000, date:yesterday; SalesOrder 2 order product A #999, date:yesterday; SalesOrder 3 order product A #1, date:now <-- the newest. The question is: should find_sales_orders_that_can_be_delivered be in the Model? I can do it in the controller. And the general question is: what goes in the Model and what goes in the Controller? Thank you

    Read the article

  • Execution Plan Optimization when where clause is removed then added back

    - by nmushov
    I have a stored procedure that uses a table valued function which executes in 9 seconds. If I alter the table valued function and remove the where clause, the stored procedure executes in 3 seconds. If I add the where clause back, the query still executes in 3 seconds. I took a look at the execution plans and it appears that after I remove the where clause, the execution plan includes parallelism and the scan count for 2 of my tables drops for 50000 and 65000 down to 5 and 3. After I add the where clause back, the optimized execution plan still runs unless I run DBCC FREEPROCCACHE. Questions 1. Why would SQL Server start using the optimized execution plan for both queries only when I first remove the where clause? Is there a way to force SQL Server to use this execution plan? Also, this is a paramaterized all-in-one query that uses the (Parameter is null or Parameter) in the where clause, which I believe is bad for performance. RETURNS TABLE AS RETURN ( SELECT TOP (@PageNumber * @PageSize) CASE WHEN @SortOrder = 'Expensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice DESC) WHEN @SortOrder = 'Inexpensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice ASC) WHEN @SortOrder = 'LowMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage ASC) WHEN @SortOrder = 'HighMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage DESC) WHEN @SortOrder = 'Closest' THEN ROW_NUMBER() OVER (ORDER BY P1.Distance ASC) WHEN @SortOrder = 'Newest' THEN ROW_NUMBER() OVER (ORDER BY [Year] DESC) WHEN @SortOrder = 'Oldest' THEN ROW_NUMBER() OVER (ORDER BY [Year] ASC) ELSE ROW_NUMBER() OVER (ORDER BY InventoryID ASC) END as rn, P1.InventoryID, P1.SellingPrice, P1.Distance, P1.Mileage, Count(*) OVER () RESULT_COUNT, dimCarStatus.[year] FROM (SELECT InventoryID, SellingPrice, Zip.Distance, Mileage, ColorKey, CarStatusKey, CarKey FROM facInventory JOIN @ZipCodes Zip ON Zip.DealerKey = facInventory.DealerKey) as P1 JOIN dimColor ON dimColor.ColorKey = P1.ColorKey JOIN dimCarStatus ON dimCarStatus.CarStatusKey = P1.CarStatusKey JOIN dimCar ON dimCar.CarKey = P1.CarKey WHERE (@ExteriorColor is NULL OR dimColor.ExteriorColor like @ExteriorColor) AND (@InteriorColor is NULL OR dimColor.InteriorColor like @InteriorColor) AND (@Condition is NULL OR dimCarStatus.Condition like @Condition) AND (@Year is NULL OR dimCarStatus.[Year] like @Year) AND (@Certified is NULL OR dimCarStatus.Certified like @Certified) AND (@Make is NULL OR dimCar.Make like @Make) AND (@ModelCategory is NULL OR dimCar.ModelCategory like @ModelCategory) AND (@Model is NULL OR dimCar.Model like @Model) AND (@Trim is NULL OR dimCar.Trim like @Trim) AND (@BodyType is NULL OR dimCar.BodyType like @BodyType) AND (@VehicleTypeCode is NULL OR dimCar.VehicleTypeCode like @VehicleTypeCode) AND (@MinPrice is NULL OR P1.SellingPrice >= @MinPrice) AND (@MaxPrice is NULL OR P1.SellingPrice < @MaxPrice) AND (@Mileage is NULL OR P1.Mileage < @Mileage) ORDER BY CASE WHEN @SortOrder = 'Expensive' THEN -SellingPrice WHEN @SortOrder = 'Inexpensive' THEN SellingPrice WHEN @SortOrder = 'LowMiles' THEN Mileage WHEN @SortOrder = 'HighMiles' THEN -Mileage WHEN @SortOrder = 'Closest' THEN P1.Distance WHEN @SortOrder = 'Newest' THEN -[YEAR] WHEN @SortOrder = 'Oldest' THEN [YEAR] ELSE InventoryID END )

    Read the article

  • MODX parse error function implode (is it me or modx?)

    - by Ian
    Hi, I cannot for the life of me figure this out, maybe someone can help. Using MODX a form takes user criteria to create a filter and return a list of documents. The form is one text field and a few checkboxes. If both text field and checkbox data is posted, the function works fine; if just the checkbox data is posted the function works fine; but if just the text field data is posted, modx gives me the following error: Error: implode() [function.implode]: Invalid arguments passed. I've tested this outside of modx with flat files and it all works fine leading me to assume a bug exists within modx. But I'm not convinced. Here's my code: <?php $order = array('price ASC'); //default sort order if(!empty($_POST['tour_finder_duration'])){ //duration submitted $days = htmlentities($_POST['tour_finder_duration']); //clean up post array_unshift($order,"duration DESC"); //add duration sort before default $filter[] = 'duration,'.$days.',4'; //add duration to filter[] (field,criterion,mode) $criteria[] = 'Number of days: <strong>'.$days.'</strong>'; //displayed on results page } if(!empty($_POST['tour_finder_dests'])){ //destination/s submitted $dests = $_POST['tour_finder_dests']; foreach($dests as $value){ //iterate through dests array $filter[] = 'searchDests,'.htmlentities($value).',7'; //add dests to filter[] $params['docid'] = $value; $params['field'] = 'pagetitle'; $pagetitle = $modx->runSnippet('GetField',$params); $dests_array[] = '<a href="[~'.$value.'~]" title="Read more about '.$pagetitle.'" class="tourdestlink">'.$pagetitle.'</a>'; } $dests_array = implode(', ',$dests_array); $criteria[] = 'Destinations: '.$dests_array; //displayed on results page } if(is_array($filter)){ $filter = implode('|',$filter);//pipe-separated string } if(is_array($order)){ $order = implode(',',$order);//comma-separated string } if(is_array($criteria)){ $criteria = implode('<br />',$criteria); } echo '<br />Order: '.$order.'<br /> Filter: '.$filter.'<br /> Criteria: '.$criteria; //next: extract docs using $filter and $order, display user's criteria using $criteria... ?> The echo statement is displayed above the MODX error message and the $filter array is correctly imploded. Any help will save my computer from flying out the window. Thanks

    Read the article

  • iOS - Application logging test and production code

    - by Peter Warbo
    I do a bunch of logging when I'm testing my application, which is useful for getting information about variable state and such. However, I have read that you should use logging sparingly in production code (because it can potentially slow down your application). My question is: if my app is in production and people are using it, whenever a crash (god forbid) occurs, how will I be able to interpret the crash information if I have removed the logging statements? I suppose I will then only have a stack trace to interpret? Does this mean I should leave logging in production code only WHERE it's really essential for me to interpret what has happened? Also, how will the logging statements relate to the crash reports? Will they be combined? I'm thinking of using Flurry for analytics and crash reports...

    Read the article

  • strtotime fails for valid date

    - by Funky Dude
    I am doing a project where I need to output the date of orders, and I do the following inside a for loop: <?php echo date('M d, Y g:i A',strtotime($order['Order']['created']));?> For some strange reason, strtotime returns false (Dec 31, 1969 7:00 PM appears instead). I made sure $order['Order']['created'] is not empty and is valid. Even stranger, that exact same piece of code works fine on the other page; the only difference is that that one is not in a loop. But that can't be the reason, right? I set the timezone to America/New_York and $order['Order']['created'] is a MySQL timestamp. var_dump on said variable gives string(27) "2010-06-16 20:12:51"

    Read the article

  • Excluding a specific substring from a regex

    - by Matt S
    I'm attempting to mangle a SQL query via regex. My goal is essentially to grab what is between FROM and ORDER BY, if ORDER BY exists. So, for example, for the query SELECT * FROM TableA WHERE ColumnA=42 ORDER BY ColumnB it should capture TableA WHERE ColumnA=42, and it should still capture that text when the ORDER BY expression isn't there. The closest I've been able to come is SELECT (.*) FROM (.*)(?=(ORDER BY)) which fails without the ORDER BY. Hopefully I'm missing something obvious. I've been hammering away in Expresso for the past hour trying to get this.
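
    A sketch of one way to do it with an optional non-capturing group instead of the lookahead, shown here with .NET's Regex class (Expresso targets .NET); the exact pattern is an assumption and regex over arbitrary SQL is inherently fragile:

    using System;
    using System.Text.RegularExpressions;

    class FromClauseDemo
    {
        static void Main()
        {
            // Group 2 lazily captures everything between FROM and an optional trailing ORDER BY.
            var pattern = @"SELECT\s+(.*?)\s+FROM\s+(.*?)(?:\s+ORDER\s+BY\s+.*)?$";
            var regex = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Singleline);

            string[] queries =
            {
                "SELECT * FROM TableA WHERE ColumnA=42 ORDER BY ColumnB",
                "SELECT * FROM TableA WHERE ColumnA=42"
            };

            foreach (var sql in queries)
            {
                var match = regex.Match(sql);
                // Prints "TableA WHERE ColumnA=42" for both inputs.
                Console.WriteLine(match.Success ? match.Groups[2].Value : "(no match)");
            }
        }
    }

    The original pattern fails without ORDER BY because a positive lookahead must find its target to succeed; making the ORDER BY tail optional (and anchoring on end of string) sidesteps that.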

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >